00:00:00.001  Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1065
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3732
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.067  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.067  The recommended git tool is: git
00:00:00.067  using credential 00000000-0000-0000-0000-000000000002
00:00:00.070   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.091  Fetching changes from the remote Git repository
00:00:00.093   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.130  Using shallow fetch with depth 1
00:00:00.130  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.130   > git --version # timeout=10
00:00:00.161   > git --version # 'git version 2.39.2'
00:00:00.161  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.183  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.183   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.508   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.518   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.529  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.529   > git config core.sparsecheckout # timeout=10
00:00:06.538   > git read-tree -mu HEAD # timeout=10
00:00:06.554   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.571  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.571   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
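The checkout above shallow-fetches the jbp build-config repository and pins it to a single revision. A minimal manual reproduction of that sequence, using the URL and commit taken from the log (Jenkins credential and timeout handling omitted):

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507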
00:00:06.646  [Pipeline] Start of Pipeline
00:00:06.659  [Pipeline] library
00:00:06.661  Loading library shm_lib@master
00:00:06.662  Library shm_lib@master is cached. Copying from home.
00:00:06.677  [Pipeline] node
00:00:06.691  Running on WFP45 in /var/jenkins/workspace/nvme-phy-autotest
00:00:06.693  [Pipeline] {
00:00:06.703  [Pipeline] catchError
00:00:06.705  [Pipeline] {
00:00:06.715  [Pipeline] wrap
00:00:06.722  [Pipeline] {
00:00:06.727  [Pipeline] stage
00:00:06.728  [Pipeline] { (Prologue)
00:00:06.929  [Pipeline] sh
00:00:07.213  + logger -p user.info -t JENKINS-CI
00:00:07.231  [Pipeline] echo
00:00:07.233  Node: WFP45
00:00:07.242  [Pipeline] sh
00:00:07.539  [Pipeline] setCustomBuildProperty
00:00:07.551  [Pipeline] echo
00:00:07.553  Cleanup processes
00:00:07.558  [Pipeline] sh
00:00:07.841  + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:07.841  829045 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:07.854  [Pipeline] sh
00:00:08.136  ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:08.136  ++ grep -v 'sudo pgrep'
00:00:08.136  ++ awk '{print $1}'
00:00:08.136  + sudo kill -9
00:00:08.136  + true
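The xtrace above is the pre-run cleanup: pgrep lists any processes still running out of the workspace's spdk tree, the pgrep invocation itself is filtered out, awk extracts the PIDs, and kill -9 runs on the (here empty) list; the trailing "+ true" shows the failure being swallowed. A sketch of the underlying one-liner as reconstructed from the trace (exact quoting in the pipeline script is an assumption):

    # reconstructed from the xtrace; quoting is an assumption
    sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
    sudo kill -9 $(sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk \
        | grep -v 'sudo pgrep' | awk '{print $1}') || true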
00:00:08.149  [Pipeline] cleanWs
00:00:08.158  [WS-CLEANUP] Deleting project workspace...
00:00:08.158  [WS-CLEANUP] Deferred wipeout is used...
00:00:08.164  [WS-CLEANUP] done
00:00:08.167  [Pipeline] setCustomBuildProperty
00:00:08.180  [Pipeline] sh
00:00:08.460  + sudo git config --global --replace-all safe.directory '*'
00:00:08.561  [Pipeline] httpRequest
00:00:08.935  [Pipeline] echo
00:00:08.937  Sorcerer 10.211.164.20 is alive
00:00:08.946  [Pipeline] retry
00:00:08.947  [Pipeline] {
00:00:08.960  [Pipeline] httpRequest
00:00:08.964  HttpMethod: GET
00:00:08.965  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.965  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.982  Response Code: HTTP/1.1 200 OK
00:00:08.982  Success: Status code 200 is in the accepted range: 200,404
00:00:08.983  Saving response body to /var/jenkins/workspace/nvme-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.036  [Pipeline] }
00:00:15.055  [Pipeline] // retry
00:00:15.063  [Pipeline] sh
00:00:15.349  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.365  [Pipeline] httpRequest
00:00:15.770  [Pipeline] echo
00:00:15.772  Sorcerer 10.211.164.20 is alive
00:00:15.782  [Pipeline] retry
00:00:15.784  [Pipeline] {
00:00:15.799  [Pipeline] httpRequest
00:00:15.803  HttpMethod: GET
00:00:15.804  URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:15.804  Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:15.820  Response Code: HTTP/1.1 200 OK
00:00:15.821  Success: Status code 200 is in the accepted range: 200,404
00:00:15.821  Saving response body to /var/jenkins/workspace/nvme-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:53.337  [Pipeline] }
00:00:53.354  [Pipeline] // retry
00:00:53.361  [Pipeline] sh
00:00:53.646  + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:57.851  [Pipeline] sh
00:00:58.136  + git -C spdk log --oneline -n5
00:00:58.136  c13c99a5e test: Various fixes for Fedora40
00:00:58.136  726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:58.136  61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:58.136  7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:58.136  ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:58.154  [Pipeline] withCredentials
00:00:58.164   > git --version # timeout=10
00:00:58.175   > git --version # 'git version 2.39.2'
00:00:58.193  Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:58.195  [Pipeline] {
00:00:58.204  [Pipeline] retry
00:00:58.206  [Pipeline] {
00:00:58.221  [Pipeline] sh
00:00:58.505  + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:59.898  [Pipeline] }
00:00:59.914  [Pipeline] // retry
00:00:59.919  [Pipeline] }
00:00:59.935  [Pipeline] // withCredentials
00:00:59.944  [Pipeline] httpRequest
00:01:00.317  [Pipeline] echo
00:01:00.319  Sorcerer 10.211.164.20 is alive
00:01:00.328  [Pipeline] retry
00:01:00.330  [Pipeline] {
00:01:00.344  [Pipeline] httpRequest
00:01:00.348  HttpMethod: GET
00:01:00.349  URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:00.349  Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:00.359  Response Code: HTTP/1.1 200 OK
00:01:00.359  Success: Status code 200 is in the accepted range: 200,404
00:01:00.359  Saving response body to /var/jenkins/workspace/nvme-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:23.358  [Pipeline] }
00:01:23.375  [Pipeline] // retry
00:01:23.382  [Pipeline] sh
00:01:23.667  + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
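Each dependency (jbp, spdk, dpdk) follows the same pattern: an httpRequest to the package cache at 10.211.164.20, then extraction with tar --no-same-owner so files end up owned by the build user. The equivalent manual fetch for the dpdk tarball, with the URL taken from the log, would be roughly:

    curl -fO http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
    tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz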
00:01:25.060  [Pipeline] sh
00:01:25.345  + git -C dpdk log --oneline -n5
00:01:25.345  eeb0605f11 version: 23.11.0
00:01:25.345  238778122a doc: update release notes for 23.11
00:01:25.345  46aa6b3cfc doc: fix description of RSS features
00:01:25.345  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:25.345  7e421ae345 devtools: support skipping forbid rule check
00:01:25.355  [Pipeline] }
00:01:25.368  [Pipeline] // stage
00:01:25.377  [Pipeline] stage
00:01:25.379  [Pipeline] { (Prepare)
00:01:25.398  [Pipeline] writeFile
00:01:25.413  [Pipeline] sh
00:01:25.697  + logger -p user.info -t JENKINS-CI
00:01:25.728  [Pipeline] sh
00:01:26.031  + logger -p user.info -t JENKINS-CI
00:01:26.041  [Pipeline] sh
00:01:26.325  + cat autorun-spdk.conf
00:01:26.325  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.325  SPDK_TEST_IOAT=1
00:01:26.325  SPDK_TEST_NVME=1
00:01:26.325  SPDK_TEST_NVME_CLI=1
00:01:26.325  SPDK_TEST_OCF=1
00:01:26.325  SPDK_RUN_UBSAN=1
00:01:26.325  SPDK_TEST_NVME_CUSE=1
00:01:26.325  SPDK_TEST_SCHEDULER=1
00:01:26.325  SPDK_TEST_ACCEL=1
00:01:26.325  SPDK_TEST_NATIVE_DPDK=v23.11
00:01:26.325  SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:26.332  RUN_NIGHTLY=1
00:01:26.337  [Pipeline] readFile
00:01:26.358  [Pipeline] withEnv
00:01:26.360  [Pipeline] {
00:01:26.371  [Pipeline] sh
00:01:26.656  + set -ex
00:01:26.656  + [[ -f /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf ]]
00:01:26.656  + source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf
00:01:26.656  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.656  ++ SPDK_TEST_IOAT=1
00:01:26.656  ++ SPDK_TEST_NVME=1
00:01:26.656  ++ SPDK_TEST_NVME_CLI=1
00:01:26.656  ++ SPDK_TEST_OCF=1
00:01:26.656  ++ SPDK_RUN_UBSAN=1
00:01:26.656  ++ SPDK_TEST_NVME_CUSE=1
00:01:26.656  ++ SPDK_TEST_SCHEDULER=1
00:01:26.656  ++ SPDK_TEST_ACCEL=1
00:01:26.656  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:26.656  ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:26.656  ++ RUN_NIGHTLY=1
00:01:26.656  + case $SPDK_TEST_NVMF_NICS in
00:01:26.656  + DRIVERS=
00:01:26.656  + [[ -n '' ]]
00:01:26.656  + exit 0
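autorun-spdk.conf is a plain KEY=VALUE shell fragment, which is why the wrapper can simply source it and branch on the resulting variables. A minimal sketch of the traced logic (paths and variable names from the log; the NIC-specific case arms are not exercised in this run and are assumed):

    set -ex
    CONF=/var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf
    [[ -f $CONF ]] && source "$CONF"
    case ${SPDK_TEST_NVMF_NICS:-} in
        *) DRIVERS= ;;                 # no NVMf NICs selected in this run
    esac
    [[ -n $DRIVERS ]] && echo "load: $DRIVERS"   # placeholder for the driver-load step
    exit 0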
00:01:26.666  [Pipeline] }
00:01:26.680  [Pipeline] // withEnv
00:01:26.685  [Pipeline] }
00:01:26.698  [Pipeline] // stage
00:01:26.706  [Pipeline] catchError
00:01:26.708  [Pipeline] {
00:01:26.721  [Pipeline] timeout
00:01:26.721  Timeout set to expire in 40 min
00:01:26.723  [Pipeline] {
00:01:26.736  [Pipeline] stage
00:01:26.738  [Pipeline] { (Tests)
00:01:26.752  [Pipeline] sh
00:01:27.037  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvme-phy-autotest
00:01:27.037  ++ readlink -f /var/jenkins/workspace/nvme-phy-autotest
00:01:27.037  + DIR_ROOT=/var/jenkins/workspace/nvme-phy-autotest
00:01:27.037  + [[ -n /var/jenkins/workspace/nvme-phy-autotest ]]
00:01:27.037  + DIR_SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:01:27.037  + DIR_OUTPUT=/var/jenkins/workspace/nvme-phy-autotest/output
00:01:27.037  + [[ -d /var/jenkins/workspace/nvme-phy-autotest/spdk ]]
00:01:27.037  + [[ ! -d /var/jenkins/workspace/nvme-phy-autotest/output ]]
00:01:27.037  + mkdir -p /var/jenkins/workspace/nvme-phy-autotest/output
00:01:27.037  + [[ -d /var/jenkins/workspace/nvme-phy-autotest/output ]]
00:01:27.038  + [[ nvme-phy-autotest == pkgdep-* ]]
00:01:27.038  + cd /var/jenkins/workspace/nvme-phy-autotest
00:01:27.038  + source /etc/os-release
00:01:27.038  ++ NAME='Fedora Linux'
00:01:27.038  ++ VERSION='39 (Cloud Edition)'
00:01:27.038  ++ ID=fedora
00:01:27.038  ++ VERSION_ID=39
00:01:27.038  ++ VERSION_CODENAME=
00:01:27.038  ++ PLATFORM_ID=platform:f39
00:01:27.038  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:27.038  ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:27.038  ++ LOGO=fedora-logo-icon
00:01:27.038  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:27.038  ++ HOME_URL=https://fedoraproject.org/
00:01:27.038  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:27.038  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:27.038  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:27.038  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:27.038  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:27.038  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:27.038  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:27.038  ++ SUPPORT_END=2024-11-12
00:01:27.038  ++ VARIANT='Cloud Edition'
00:01:27.038  ++ VARIANT_ID=cloud
00:01:27.038  + uname -a
00:01:27.038  Linux spdk-wfp-45 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:27.038  + sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status
00:01:29.639  Hugepages
00:01:29.639  node     hugesize     free /  total
00:01:29.639  node0   1048576kB        0 /      0
00:01:29.639  node0      2048kB        0 /      0
00:01:29.639  node1   1048576kB        0 /      0
00:01:29.639  node1      2048kB        0 /      0
00:01:29.639  
00:01:29.639  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:29.639  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:01:29.639  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:01:29.639  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:01:29.639  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:01:29.639  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:01:29.639  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:01:29.639  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:01:29.639  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:01:29.898  NVMe                      0000:5e:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:01:29.898  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:01:29.898  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:01:29.898  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:01:29.898  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:01:29.898  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:01:29.898  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:01:29.898  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:01:29.898  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
00:01:29.898  + rm -f /tmp/spdk-ld-path
00:01:29.898  + source autorun-spdk.conf
00:01:29.898  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.898  ++ SPDK_TEST_IOAT=1
00:01:29.898  ++ SPDK_TEST_NVME=1
00:01:29.898  ++ SPDK_TEST_NVME_CLI=1
00:01:29.898  ++ SPDK_TEST_OCF=1
00:01:29.898  ++ SPDK_RUN_UBSAN=1
00:01:29.898  ++ SPDK_TEST_NVME_CUSE=1
00:01:29.898  ++ SPDK_TEST_SCHEDULER=1
00:01:29.898  ++ SPDK_TEST_ACCEL=1
00:01:29.898  ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:29.898  ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:29.898  ++ RUN_NIGHTLY=1
00:01:29.898  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:29.898  + [[ -n '' ]]
00:01:29.898  + sudo git config --global --add safe.directory /var/jenkins/workspace/nvme-phy-autotest/spdk
00:01:29.898  + for M in /var/spdk/build-*-manifest.txt
00:01:29.898  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:29.898  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/
00:01:29.898  + for M in /var/spdk/build-*-manifest.txt
00:01:29.898  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:29.898  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/
00:01:29.898  + for M in /var/spdk/build-*-manifest.txt
00:01:29.898  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:29.898  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/
00:01:29.898  ++ uname
00:01:29.898  + [[ Linux == \L\i\n\u\x ]]
00:01:29.898  + sudo dmesg -T
00:01:29.898  + sudo dmesg --clear
00:01:29.898  + dmesg_pid=829931
00:01:29.898  + [[ Fedora Linux == FreeBSD ]]
00:01:29.898  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.898  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.898  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:29.898  + [[ -x /usr/src/fio-static/fio ]]
00:01:29.898  + export FIO_BIN=/usr/src/fio-static/fio
00:01:29.898  + FIO_BIN=/usr/src/fio-static/fio
00:01:29.898  + sudo dmesg -Tw
00:01:29.898  + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\e\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:29.898  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:29.898  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:29.898  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:29.898  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:29.898  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:29.898  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:29.898  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:29.898  + spdk/autorun.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf
00:01:29.898  Test configuration:
00:01:29.898  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.898  SPDK_TEST_IOAT=1
00:01:29.898  SPDK_TEST_NVME=1
00:01:29.898  SPDK_TEST_NVME_CLI=1
00:01:29.898  SPDK_TEST_OCF=1
00:01:29.898  SPDK_RUN_UBSAN=1
00:01:29.898  SPDK_TEST_NVME_CUSE=1
00:01:29.898  SPDK_TEST_SCHEDULER=1
00:01:29.898  SPDK_TEST_ACCEL=1
00:01:29.898  SPDK_TEST_NATIVE_DPDK=v23.11
00:01:29.898  SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:30.157  RUN_NIGHTLY=1
00:01:30.157   00:32:19	-- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:30.157    00:32:19	-- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:01:30.157     00:32:19	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:30.157     00:32:19	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:30.157     00:32:19	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:30.157      00:32:19	-- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157      00:32:19	-- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157      00:32:19	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157      00:32:19	-- paths/export.sh@5 -- $ export PATH
00:01:30.157      00:32:19	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.157    00:32:19	-- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output
00:01:30.157      00:32:19	-- common/autobuild_common.sh@440 -- $ date +%s
00:01:30.157     00:32:19	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734391939.XXXXXX
00:01:30.157    00:32:19	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734391939.xvCmdM
00:01:30.157    00:32:19	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:01:30.157    00:32:19	-- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
00:01:30.157     00:32:19	-- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:30.157    00:32:19	-- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvme-phy-autotest/dpdk'
00:01:30.157    00:32:19	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:30.157    00:32:19	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp  --exclude /var/jenkins/workspace/nvme-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:30.157     00:32:19	-- common/autobuild_common.sh@456 -- $ get_config_params
00:01:30.157     00:32:19	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:01:30.157     00:32:19	-- common/autotest_common.sh@10 -- $ set +x
00:01:30.157    00:32:19	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build'
00:01:30.158   00:32:19	-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:30.158   00:32:19	-- spdk/autobuild.sh@12 -- $ umask 022
00:01:30.158   00:32:19	-- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk
00:01:30.158   00:32:19	-- spdk/autobuild.sh@16 -- $ date -u
00:01:30.158  Mon Dec 16 11:32:19 PM UTC 2024
00:01:30.158   00:32:19	-- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:30.158  LTS-67-gc13c99a5e
00:01:30.158   00:32:19	-- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:30.158   00:32:19	-- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:30.158   00:32:19	-- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:30.158   00:32:19	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:30.158   00:32:19	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:30.158   00:32:19	-- common/autotest_common.sh@10 -- $ set +x
00:01:30.158  ************************************
00:01:30.158  START TEST ubsan
00:01:30.158  ************************************
00:01:30.158   00:32:19	-- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:30.158  using ubsan
00:01:30.158  
00:01:30.158  real	0m0.000s
00:01:30.158  user	0m0.000s
00:01:30.158  sys	0m0.000s
00:01:30.158   00:32:19	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:30.158   00:32:19	-- common/autotest_common.sh@10 -- $ set +x
00:01:30.158  ************************************
00:01:30.158  END TEST ubsan
00:01:30.158  ************************************
00:01:30.158   00:32:19	-- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:30.158   00:32:19	-- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:30.158   00:32:19	-- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:30.158   00:32:19	-- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:01:30.158   00:32:19	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:30.158   00:32:19	-- common/autotest_common.sh@10 -- $ set +x
00:01:30.158  ************************************
00:01:30.158  START TEST build_native_dpdk
00:01:30.158  ************************************
00:01:30.158   00:32:19	-- common/autotest_common.sh@1114 -- $ _build_native_dpdk
00:01:30.158   00:32:19	-- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:30.158   00:32:19	-- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:30.158   00:32:19	-- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:30.158   00:32:19	-- common/autobuild_common.sh@51 -- $ local compiler
00:01:30.158   00:32:19	-- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:30.158   00:32:19	-- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:30.158   00:32:19	-- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:30.158   00:32:19	-- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:30.158   00:32:19	-- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:30.158   00:32:19	-- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:30.158    00:32:19	-- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:30.158   00:32:19	-- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:30.158   00:32:19	-- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:30.158   00:32:19	-- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:30.158    00:32:19	-- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:30.158   00:32:19	-- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvme-phy-autotest/dpdk
00:01:30.158   00:32:19	-- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvme-phy-autotest/dpdk ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:01:30.158   00:32:19	-- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvme-phy-autotest/dpdk log --oneline -n 5
00:01:30.158  eeb0605f11 version: 23.11.0
00:01:30.158  238778122a doc: update release notes for 23.11
00:01:30.158  46aa6b3cfc doc: fix description of RSS features
00:01:30.158  dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:30.158  7e421ae345 devtools: support skipping forbid rule check
00:01:30.158   00:32:19	-- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:30.158   00:32:19	-- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:30.158   00:32:19	-- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:30.158   00:32:19	-- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:30.158   00:32:19	-- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:30.158   00:32:19	-- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:30.158   00:32:19	-- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:30.158   00:32:19	-- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:30.158   00:32:19	-- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/dpdk
00:01:30.158    00:32:19	-- common/autobuild_common.sh@168 -- $ uname -s
00:01:30.158   00:32:19	-- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:30.158   00:32:19	-- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:30.158   00:32:19	-- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:30.158   00:32:19	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:01:30.158   00:32:19	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:01:30.158   00:32:19	-- scripts/common.sh@335 -- $ IFS=.-:
00:01:30.158   00:32:19	-- scripts/common.sh@335 -- $ read -ra ver1
00:01:30.158   00:32:19	-- scripts/common.sh@336 -- $ IFS=.-:
00:01:30.158   00:32:19	-- scripts/common.sh@336 -- $ read -ra ver2
00:01:30.158   00:32:19	-- scripts/common.sh@337 -- $ local 'op=<'
00:01:30.158   00:32:19	-- scripts/common.sh@339 -- $ ver1_l=3
00:01:30.158   00:32:19	-- scripts/common.sh@340 -- $ ver2_l=3
00:01:30.158   00:32:19	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:01:30.158   00:32:19	-- scripts/common.sh@343 -- $ case "$op" in
00:01:30.158   00:32:19	-- scripts/common.sh@344 -- $ : 1
00:01:30.158   00:32:19	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:01:30.158   00:32:19	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:30.158    00:32:19	-- scripts/common.sh@364 -- $ decimal 23
00:01:30.158    00:32:19	-- scripts/common.sh@352 -- $ local d=23
00:01:30.158    00:32:19	-- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:30.158    00:32:19	-- scripts/common.sh@354 -- $ echo 23
00:01:30.158   00:32:19	-- scripts/common.sh@364 -- $ ver1[v]=23
00:01:30.158    00:32:19	-- scripts/common.sh@365 -- $ decimal 21
00:01:30.158    00:32:19	-- scripts/common.sh@352 -- $ local d=21
00:01:30.158    00:32:19	-- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:30.158    00:32:19	-- scripts/common.sh@354 -- $ echo 21
00:01:30.158   00:32:19	-- scripts/common.sh@365 -- $ ver2[v]=21
00:01:30.158   00:32:19	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:01:30.158   00:32:19	-- scripts/common.sh@366 -- $ return 1
00:01:30.158   00:32:19	-- common/autobuild_common.sh@173 -- $ patch -p1
00:01:30.158  patching file config/rte_config.h
00:01:30.158  Hunk #1 succeeded at 60 (offset 1 line).
00:01:30.158   00:32:19	-- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:01:30.158   00:32:19	-- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:01:30.158   00:32:19	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:01:30.158   00:32:19	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:01:30.158   00:32:19	-- scripts/common.sh@335 -- $ IFS=.-:
00:01:30.158   00:32:19	-- scripts/common.sh@335 -- $ read -ra ver1
00:01:30.158   00:32:19	-- scripts/common.sh@336 -- $ IFS=.-:
00:01:30.158   00:32:19	-- scripts/common.sh@336 -- $ read -ra ver2
00:01:30.158   00:32:19	-- scripts/common.sh@337 -- $ local 'op=<'
00:01:30.158   00:32:19	-- scripts/common.sh@339 -- $ ver1_l=3
00:01:30.158   00:32:19	-- scripts/common.sh@340 -- $ ver2_l=3
00:01:30.158   00:32:19	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:01:30.158   00:32:19	-- scripts/common.sh@343 -- $ case "$op" in
00:01:30.158   00:32:19	-- scripts/common.sh@344 -- $ : 1
00:01:30.158   00:32:19	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:01:30.158   00:32:19	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:30.158    00:32:19	-- scripts/common.sh@364 -- $ decimal 23
00:01:30.158    00:32:19	-- scripts/common.sh@352 -- $ local d=23
00:01:30.158    00:32:19	-- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:30.158    00:32:19	-- scripts/common.sh@354 -- $ echo 23
00:01:30.418   00:32:19	-- scripts/common.sh@364 -- $ ver1[v]=23
00:01:30.418    00:32:19	-- scripts/common.sh@365 -- $ decimal 24
00:01:30.418    00:32:19	-- scripts/common.sh@352 -- $ local d=24
00:01:30.418    00:32:19	-- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:30.418    00:32:19	-- scripts/common.sh@354 -- $ echo 24
00:01:30.418   00:32:19	-- scripts/common.sh@365 -- $ ver2[v]=24
00:01:30.418   00:32:19	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:01:30.418   00:32:19	-- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:01:30.418   00:32:19	-- scripts/common.sh@367 -- $ return 0
00:01:30.418   00:32:19	-- common/autobuild_common.sh@177 -- $ patch -p1
00:01:30.418  patching file lib/pcapng/rte_pcapng.c
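The heavily traced version checks above come from scripts/common.sh: each version string is split into fields on '.', '-' and ':' by setting IFS before read -ra, then the fields are compared numerically one position at a time. A condensed sketch of that technique (function names match the trace; the bodies are a simplified reconstruction, not a verbatim copy, e.g. the decimal-validation helper is omitted):

    cmp_versions() {    # usage: cmp_versions 23.11.0 '<' 24.07.0
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]      # all fields equal: true only for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 23.11.0 21.11.0 || echo "not older"   # returns 1, as in the first trace
    lt 23.11.0 24.07.0 && echo "older"       # returns 0, as in the second trace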
00:01:30.418   00:32:19	-- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:01:30.418    00:32:19	-- common/autobuild_common.sh@181 -- $ uname -s
00:01:30.418   00:32:19	-- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:01:30.418    00:32:19	-- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:30.418   00:32:19	-- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:35.693  The Meson build system
00:01:35.693  Version: 1.5.0
00:01:35.693  Source dir: /var/jenkins/workspace/nvme-phy-autotest/dpdk
00:01:35.693  Build dir: /var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp
00:01:35.693  Build type: native build
00:01:35.693  Program cat found: YES (/usr/bin/cat)
00:01:35.693  Project name: DPDK
00:01:35.693  Project version: 23.11.0
00:01:35.693  C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:35.693  C linker for the host machine: gcc ld.bfd 2.40-14
00:01:35.693  Host machine cpu family: x86_64
00:01:35.693  Host machine cpu: x86_64
00:01:35.693  Message: ## Building in Developer Mode ##
00:01:35.693  Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:35.693  Program check-symbols.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:35.693  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:35.693  Program python3 found: YES (/usr/bin/python3)
00:01:35.693  Program cat found: YES (/usr/bin/cat)
00:01:35.693  config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:35.693  Compiler for C supports arguments -march=native: YES 
00:01:35.693  Checking for size of "void *" : 8 
00:01:35.693  Checking for size of "void *" : 8 (cached)
00:01:35.693  Library m found: YES
00:01:35.693  Library numa found: YES
00:01:35.693  Has header "numaif.h" : YES 
00:01:35.693  Library fdt found: NO
00:01:35.693  Library execinfo found: NO
00:01:35.693  Has header "execinfo.h" : YES 
00:01:35.693  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:35.693  Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:35.693  Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:35.693  Run-time dependency jansson found: NO (tried pkgconfig)
00:01:35.693  Run-time dependency openssl found: YES 3.1.1
00:01:35.693  Run-time dependency libpcap found: YES 1.10.4
00:01:35.693  Has header "pcap.h" with dependency libpcap: YES 
00:01:35.693  Compiler for C supports arguments -Wcast-qual: YES 
00:01:35.693  Compiler for C supports arguments -Wdeprecated: YES 
00:01:35.693  Compiler for C supports arguments -Wformat: YES 
00:01:35.693  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:01:35.693  Compiler for C supports arguments -Wformat-security: NO 
00:01:35.693  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:35.693  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:01:35.693  Compiler for C supports arguments -Wnested-externs: YES 
00:01:35.693  Compiler for C supports arguments -Wold-style-definition: YES 
00:01:35.693  Compiler for C supports arguments -Wpointer-arith: YES 
00:01:35.693  Compiler for C supports arguments -Wsign-compare: YES 
00:01:35.693  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:01:35.693  Compiler for C supports arguments -Wundef: YES 
00:01:35.693  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:35.693  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:01:35.693  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:01:35.693  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:35.693  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:01:35.693  Program objdump found: YES (/usr/bin/objdump)
00:01:35.693  Compiler for C supports arguments -mavx512f: YES 
00:01:35.693  Checking if "AVX512 checking" compiles: YES 
00:01:35.693  Fetching value of define "__SSE4_2__" : 1 
00:01:35.693  Fetching value of define "__AES__" : 1 
00:01:35.693  Fetching value of define "__AVX__" : 1 
00:01:35.693  Fetching value of define "__AVX2__" : 1 
00:01:35.693  Fetching value of define "__AVX512BW__" : 1 
00:01:35.693  Fetching value of define "__AVX512CD__" : 1 
00:01:35.693  Fetching value of define "__AVX512DQ__" : 1 
00:01:35.693  Fetching value of define "__AVX512F__" : 1 
00:01:35.693  Fetching value of define "__AVX512VL__" : 1 
00:01:35.693  Fetching value of define "__PCLMUL__" : 1 
00:01:35.693  Fetching value of define "__RDRND__" : 1 
00:01:35.693  Fetching value of define "__RDSEED__" : 1 
00:01:35.693  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:01:35.693  Fetching value of define "__znver1__" : (undefined) 
00:01:35.693  Fetching value of define "__znver2__" : (undefined) 
00:01:35.693  Fetching value of define "__znver3__" : (undefined) 
00:01:35.693  Fetching value of define "__znver4__" : (undefined) 
00:01:35.693  Compiler for C supports arguments -Wno-format-truncation: YES 
00:01:35.693  Message: lib/log: Defining dependency "log"
00:01:35.693  Message: lib/kvargs: Defining dependency "kvargs"
00:01:35.693  Message: lib/telemetry: Defining dependency "telemetry"
00:01:35.693  Checking for function "getentropy" : NO 
00:01:35.693  Message: lib/eal: Defining dependency "eal"
00:01:35.693  Message: lib/ring: Defining dependency "ring"
00:01:35.693  Message: lib/rcu: Defining dependency "rcu"
00:01:35.693  Message: lib/mempool: Defining dependency "mempool"
00:01:35.693  Message: lib/mbuf: Defining dependency "mbuf"
00:01:35.693  Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:35.693  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:35.693  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:35.693  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:35.693  Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:35.693  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:35.693  Compiler for C supports arguments -mpclmul: YES 
00:01:35.693  Compiler for C supports arguments -maes: YES 
00:01:35.693  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:35.693  Compiler for C supports arguments -mavx512bw: YES 
00:01:35.694  Compiler for C supports arguments -mavx512dq: YES 
00:01:35.694  Compiler for C supports arguments -mavx512vl: YES 
00:01:35.694  Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:35.694  Compiler for C supports arguments -mavx2: YES 
00:01:35.694  Compiler for C supports arguments -mavx: YES 
00:01:35.694  Message: lib/net: Defining dependency "net"
00:01:35.694  Message: lib/meter: Defining dependency "meter"
00:01:35.694  Message: lib/ethdev: Defining dependency "ethdev"
00:01:35.694  Message: lib/pci: Defining dependency "pci"
00:01:35.694  Message: lib/cmdline: Defining dependency "cmdline"
00:01:35.694  Message: lib/metrics: Defining dependency "metrics"
00:01:35.694  Message: lib/hash: Defining dependency "hash"
00:01:35.694  Message: lib/timer: Defining dependency "timer"
00:01:35.694  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:35.694  Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:35.694  Fetching value of define "__AVX512CD__" : 1 (cached)
00:01:35.694  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:35.694  Message: lib/acl: Defining dependency "acl"
00:01:35.694  Message: lib/bbdev: Defining dependency "bbdev"
00:01:35.694  Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:35.694  Run-time dependency libelf found: YES 0.191
00:01:35.694  Message: lib/bpf: Defining dependency "bpf"
00:01:35.694  Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:35.694  Message: lib/compressdev: Defining dependency "compressdev"
00:01:35.694  Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:35.694  Message: lib/distributor: Defining dependency "distributor"
00:01:35.694  Message: lib/dmadev: Defining dependency "dmadev"
00:01:35.694  Message: lib/efd: Defining dependency "efd"
00:01:35.694  Message: lib/eventdev: Defining dependency "eventdev"
00:01:35.694  Message: lib/dispatcher: Defining dependency "dispatcher"
00:01:35.694  Message: lib/gpudev: Defining dependency "gpudev"
00:01:35.694  Message: lib/gro: Defining dependency "gro"
00:01:35.694  Message: lib/gso: Defining dependency "gso"
00:01:35.694  Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:35.694  Message: lib/jobstats: Defining dependency "jobstats"
00:01:35.694  Message: lib/latencystats: Defining dependency "latencystats"
00:01:35.694  Message: lib/lpm: Defining dependency "lpm"
00:01:35.694  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:35.694  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:35.694  Fetching value of define "__AVX512IFMA__" : (undefined) 
00:01:35.694  Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:01:35.694  Message: lib/member: Defining dependency "member"
00:01:35.694  Message: lib/pcapng: Defining dependency "pcapng"
00:01:35.694  Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:35.694  Message: lib/power: Defining dependency "power"
00:01:35.694  Message: lib/rawdev: Defining dependency "rawdev"
00:01:35.694  Message: lib/regexdev: Defining dependency "regexdev"
00:01:35.694  Message: lib/mldev: Defining dependency "mldev"
00:01:35.694  Message: lib/rib: Defining dependency "rib"
00:01:35.694  Message: lib/reorder: Defining dependency "reorder"
00:01:35.694  Message: lib/sched: Defining dependency "sched"
00:01:35.694  Message: lib/security: Defining dependency "security"
00:01:35.694  Message: lib/stack: Defining dependency "stack"
00:01:35.694  Has header "linux/userfaultfd.h" : YES 
00:01:35.694  Has header "linux/vduse.h" : YES 
00:01:35.694  Message: lib/vhost: Defining dependency "vhost"
00:01:35.694  Message: lib/ipsec: Defining dependency "ipsec"
00:01:35.694  Message: lib/pdcp: Defining dependency "pdcp"
00:01:35.694  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:35.694  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:35.694  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:35.694  Message: lib/fib: Defining dependency "fib"
00:01:35.694  Message: lib/port: Defining dependency "port"
00:01:35.694  Message: lib/pdump: Defining dependency "pdump"
00:01:35.694  Message: lib/table: Defining dependency "table"
00:01:35.694  Message: lib/pipeline: Defining dependency "pipeline"
00:01:35.694  Message: lib/graph: Defining dependency "graph"
00:01:35.694  Message: lib/node: Defining dependency "node"
00:01:35.694  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:37.601  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:37.601  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:37.601  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:37.601  Compiler for C supports arguments -Wno-sign-compare: YES 
00:01:37.601  Compiler for C supports arguments -Wno-unused-value: YES 
00:01:37.601  Compiler for C supports arguments -Wno-format: YES 
00:01:37.601  Compiler for C supports arguments -Wno-format-security: YES 
00:01:37.601  Compiler for C supports arguments -Wno-format-nonliteral: YES 
00:01:37.601  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:01:37.601  Compiler for C supports arguments -Wno-unused-but-set-variable: YES 
00:01:37.601  Compiler for C supports arguments -Wno-unused-parameter: YES 
00:01:37.601  Fetching value of define "__AVX512F__" : 1 (cached)
00:01:37.601  Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:37.601  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:37.601  Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:37.601  Compiler for C supports arguments -march=skylake-avx512: YES 
00:01:37.601  Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:37.601  Has header "sys/epoll.h" : YES 
00:01:37.601  Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:37.601  Configuring doxy-api-html.conf using configuration
00:01:37.601  Configuring doxy-api-man.conf using configuration
00:01:37.601  Program mandb found: YES (/usr/bin/mandb)
00:01:37.601  Program sphinx-build found: NO
00:01:37.601  Configuring rte_build_config.h using configuration
00:01:37.601  Message: 
00:01:37.601  =================
00:01:37.601  Applications Enabled
00:01:37.601  =================
00:01:37.601  
00:01:37.601  apps:
00:01:37.601  	dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 
00:01:37.601  	test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 
00:01:37.601  	test-pmd, test-regex, test-sad, test-security-perf, 
00:01:37.601  
00:01:37.601  Message: 
00:01:37.601  =================
00:01:37.601  Libraries Enabled
00:01:37.601  =================
00:01:37.601  
00:01:37.601  libs:
00:01:37.601  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:01:37.601  	net, meter, ethdev, pci, cmdline, metrics, hash, timer, 
00:01:37.601  	acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 
00:01:37.601  	dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 
00:01:37.601  	jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 
00:01:37.601  	mldev, rib, reorder, sched, security, stack, vhost, ipsec, 
00:01:37.601  	pdcp, fib, port, pdump, table, pipeline, graph, node, 
00:01:37.601  	
00:01:37.601  
00:01:37.601  Message: 
00:01:37.601  ===============
00:01:37.601  Drivers Enabled
00:01:37.601  ===============
00:01:37.601  
00:01:37.601  common:
00:01:37.601  	
00:01:37.601  bus:
00:01:37.601  	pci, vdev, 
00:01:37.601  mempool:
00:01:37.601  	ring, 
00:01:37.601  dma:
00:01:37.601  	
00:01:37.601  net:
00:01:37.601  	i40e, 
00:01:37.601  raw:
00:01:37.601  	
00:01:37.601  crypto:
00:01:37.601  	
00:01:37.601  compress:
00:01:37.601  	
00:01:37.601  regex:
00:01:37.601  	
00:01:37.601  ml:
00:01:37.601  	
00:01:37.601  vdpa:
00:01:37.601  	
00:01:37.601  event:
00:01:37.601  	
00:01:37.601  baseband:
00:01:37.601  	
00:01:37.601  gpu:
00:01:37.601  	
00:01:37.601  
00:01:37.601  Message: 
00:01:37.601  =================
00:01:37.601  Content Skipped
00:01:37.601  =================
00:01:37.601  
00:01:37.601  apps:
00:01:37.601  	
00:01:37.601  libs:
00:01:37.601  	
00:01:37.601  drivers:
00:01:37.601  	common/cpt:	not in enabled drivers build config
00:01:37.601  	common/dpaax:	not in enabled drivers build config
00:01:37.601  	common/iavf:	not in enabled drivers build config
00:01:37.601  	common/idpf:	not in enabled drivers build config
00:01:37.601  	common/mvep:	not in enabled drivers build config
00:01:37.601  	common/octeontx:	not in enabled drivers build config
00:01:37.601  	bus/auxiliary:	not in enabled drivers build config
00:01:37.601  	bus/cdx:	not in enabled drivers build config
00:01:37.601  	bus/dpaa:	not in enabled drivers build config
00:01:37.601  	bus/fslmc:	not in enabled drivers build config
00:01:37.601  	bus/ifpga:	not in enabled drivers build config
00:01:37.601  	bus/platform:	not in enabled drivers build config
00:01:37.601  	bus/vmbus:	not in enabled drivers build config
00:01:37.601  	common/cnxk:	not in enabled drivers build config
00:01:37.601  	common/mlx5:	not in enabled drivers build config
00:01:37.601  	common/nfp:	not in enabled drivers build config
00:01:37.601  	common/qat:	not in enabled drivers build config
00:01:37.601  	common/sfc_efx:	not in enabled drivers build config
00:01:37.601  	mempool/bucket:	not in enabled drivers build config
00:01:37.601  	mempool/cnxk:	not in enabled drivers build config
00:01:37.601  	mempool/dpaa:	not in enabled drivers build config
00:01:37.601  	mempool/dpaa2:	not in enabled drivers build config
00:01:37.602  	mempool/octeontx:	not in enabled drivers build config
00:01:37.602  	mempool/stack:	not in enabled drivers build config
00:01:37.602  	dma/cnxk:	not in enabled drivers build config
00:01:37.602  	dma/dpaa:	not in enabled drivers build config
00:01:37.602  	dma/dpaa2:	not in enabled drivers build config
00:01:37.602  	dma/hisilicon:	not in enabled drivers build config
00:01:37.602  	dma/idxd:	not in enabled drivers build config
00:01:37.602  	dma/ioat:	not in enabled drivers build config
00:01:37.602  	dma/skeleton:	not in enabled drivers build config
00:01:37.602  	net/af_packet:	not in enabled drivers build config
00:01:37.602  	net/af_xdp:	not in enabled drivers build config
00:01:37.602  	net/ark:	not in enabled drivers build config
00:01:37.602  	net/atlantic:	not in enabled drivers build config
00:01:37.602  	net/avp:	not in enabled drivers build config
00:01:37.602  	net/axgbe:	not in enabled drivers build config
00:01:37.602  	net/bnx2x:	not in enabled drivers build config
00:01:37.602  	net/bnxt:	not in enabled drivers build config
00:01:37.602  	net/bonding:	not in enabled drivers build config
00:01:37.602  	net/cnxk:	not in enabled drivers build config
00:01:37.602  	net/cpfl:	not in enabled drivers build config
00:01:37.602  	net/cxgbe:	not in enabled drivers build config
00:01:37.602  	net/dpaa:	not in enabled drivers build config
00:01:37.602  	net/dpaa2:	not in enabled drivers build config
00:01:37.602  	net/e1000:	not in enabled drivers build config
00:01:37.602  	net/ena:	not in enabled drivers build config
00:01:37.602  	net/enetc:	not in enabled drivers build config
00:01:37.602  	net/enetfec:	not in enabled drivers build config
00:01:37.602  	net/enic:	not in enabled drivers build config
00:01:37.602  	net/failsafe:	not in enabled drivers build config
00:01:37.602  	net/fm10k:	not in enabled drivers build config
00:01:37.602  	net/gve:	not in enabled drivers build config
00:01:37.602  	net/hinic:	not in enabled drivers build config
00:01:37.602  	net/hns3:	not in enabled drivers build config
00:01:37.602  	net/iavf:	not in enabled drivers build config
00:01:37.602  	net/ice:	not in enabled drivers build config
00:01:37.602  	net/idpf:	not in enabled drivers build config
00:01:37.602  	net/igc:	not in enabled drivers build config
00:01:37.602  	net/ionic:	not in enabled drivers build config
00:01:37.602  	net/ipn3ke:	not in enabled drivers build config
00:01:37.602  	net/ixgbe:	not in enabled drivers build config
00:01:37.602  	net/mana:	not in enabled drivers build config
00:01:37.602  	net/memif:	not in enabled drivers build config
00:01:37.602  	net/mlx4:	not in enabled drivers build config
00:01:37.602  	net/mlx5:	not in enabled drivers build config
00:01:37.602  	net/mvneta:	not in enabled drivers build config
00:01:37.602  	net/mvpp2:	not in enabled drivers build config
00:01:37.602  	net/netvsc:	not in enabled drivers build config
00:01:37.602  	net/nfb:	not in enabled drivers build config
00:01:37.602  	net/nfp:	not in enabled drivers build config
00:01:37.602  	net/ngbe:	not in enabled drivers build config
00:01:37.602  	net/null:	not in enabled drivers build config
00:01:37.602  	net/octeontx:	not in enabled drivers build config
00:01:37.602  	net/octeon_ep:	not in enabled drivers build config
00:01:37.602  	net/pcap:	not in enabled drivers build config
00:01:37.602  	net/pfe:	not in enabled drivers build config
00:01:37.602  	net/qede:	not in enabled drivers build config
00:01:37.602  	net/ring:	not in enabled drivers build config
00:01:37.602  	net/sfc:	not in enabled drivers build config
00:01:37.602  	net/softnic:	not in enabled drivers build config
00:01:37.602  	net/tap:	not in enabled drivers build config
00:01:37.602  	net/thunderx:	not in enabled drivers build config
00:01:37.602  	net/txgbe:	not in enabled drivers build config
00:01:37.602  	net/vdev_netvsc:	not in enabled drivers build config
00:01:37.602  	net/vhost:	not in enabled drivers build config
00:01:37.602  	net/virtio:	not in enabled drivers build config
00:01:37.602  	net/vmxnet3:	not in enabled drivers build config
00:01:37.602  	raw/cnxk_bphy:	not in enabled drivers build config
00:01:37.602  	raw/cnxk_gpio:	not in enabled drivers build config
00:01:37.602  	raw/dpaa2_cmdif:	not in enabled drivers build config
00:01:37.602  	raw/ifpga:	not in enabled drivers build config
00:01:37.602  	raw/ntb:	not in enabled drivers build config
00:01:37.602  	raw/skeleton:	not in enabled drivers build config
00:01:37.602  	crypto/armv8:	not in enabled drivers build config
00:01:37.602  	crypto/bcmfs:	not in enabled drivers build config
00:01:37.602  	crypto/caam_jr:	not in enabled drivers build config
00:01:37.602  	crypto/ccp:	not in enabled drivers build config
00:01:37.602  	crypto/cnxk:	not in enabled drivers build config
00:01:37.602  	crypto/dpaa_sec:	not in enabled drivers build config
00:01:37.602  	crypto/dpaa2_sec:	not in enabled drivers build config
00:01:37.602  	crypto/ipsec_mb:	not in enabled drivers build config
00:01:37.602  	crypto/mlx5:	not in enabled drivers build config
00:01:37.602  	crypto/mvsam:	not in enabled drivers build config
00:01:37.602  	crypto/nitrox:	not in enabled drivers build config
00:01:37.602  	crypto/null:	not in enabled drivers build config
00:01:37.602  	crypto/octeontx:	not in enabled drivers build config
00:01:37.602  	crypto/openssl:	not in enabled drivers build config
00:01:37.602  	crypto/scheduler:	not in enabled drivers build config
00:01:37.602  	crypto/uadk:	not in enabled drivers build config
00:01:37.602  	crypto/virtio:	not in enabled drivers build config
00:01:37.602  	compress/isal:	not in enabled drivers build config
00:01:37.602  	compress/mlx5:	not in enabled drivers build config
00:01:37.602  	compress/octeontx:	not in enabled drivers build config
00:01:37.602  	compress/zlib:	not in enabled drivers build config
00:01:37.602  	regex/mlx5:	not in enabled drivers build config
00:01:37.602  	regex/cn9k:	not in enabled drivers build config
00:01:37.602  	ml/cnxk:	not in enabled drivers build config
00:01:37.602  	vdpa/ifc:	not in enabled drivers build config
00:01:37.602  	vdpa/mlx5:	not in enabled drivers build config
00:01:37.602  	vdpa/nfp:	not in enabled drivers build config
00:01:37.602  	vdpa/sfc:	not in enabled drivers build config
00:01:37.602  	event/cnxk:	not in enabled drivers build config
00:01:37.602  	event/dlb2:	not in enabled drivers build config
00:01:37.602  	event/dpaa:	not in enabled drivers build config
00:01:37.602  	event/dpaa2:	not in enabled drivers build config
00:01:37.602  	event/dsw:	not in enabled drivers build config
00:01:37.602  	event/opdl:	not in enabled drivers build config
00:01:37.602  	event/skeleton:	not in enabled drivers build config
00:01:37.602  	event/sw:	not in enabled drivers build config
00:01:37.602  	event/octeontx:	not in enabled drivers build config
00:01:37.602  	baseband/acc:	not in enabled drivers build config
00:01:37.602  	baseband/fpga_5gnr_fec:	not in enabled drivers build config
00:01:37.602  	baseband/fpga_lte_fec:	not in enabled drivers build config
00:01:37.602  	baseband/la12xx:	not in enabled drivers build config
00:01:37.602  	baseband/null:	not in enabled drivers build config
00:01:37.602  	baseband/turbo_sw:	not in enabled drivers build config
00:01:37.602  	gpu/cuda:	not in enabled drivers build config
00:01:37.602  	
00:01:37.602  
00:01:37.602  Build targets in project: 217
00:01:37.602  
00:01:37.602  DPDK 23.11.0
00:01:37.602  
00:01:37.602    User defined options
00:01:37.602      libdir        : lib
00:01:37.602      prefix        : /var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:01:37.602      c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:37.602      c_link_args   : 
00:01:37.602      enable_docs   : false
00:01:37.602      enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:37.602      enable_kmods  : false
00:01:37.602      machine       : native
00:01:37.602      tests         : false
00:01:37.602  
00:01:37.602  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:37.602  WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
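Meson flags the bare "meson [options]" invocation as deprecated; using the options echoed in the "User defined options" summary above, the unambiguous form of the same configure step would be:

    meson setup build-tmp --prefix=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build \
        --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,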
00:01:37.602   00:32:26	-- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp -j72
00:01:37.865  ninja: Entering directory `/var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp'
00:01:37.865  [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:37.865  [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:37.865  [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:37.865  [4/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:37.865  [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:37.865  [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:37.865  [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:37.866  [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:37.866  [9/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:37.866  [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:37.866  [11/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:37.866  [12/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:37.866  [13/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:38.126  [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:38.126  [15/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:38.126  [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:38.126  [17/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:38.126  [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:38.126  [19/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:38.126  [20/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:38.126  [21/707] Linking static target lib/librte_kvargs.a
00:01:38.126  [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:38.126  [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:38.126  [24/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:38.126  [25/707] Linking static target lib/librte_log.a
00:01:38.385  [26/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:38.385  [27/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.385  [28/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:38.646  [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:38.646  [30/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:38.646  [31/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:38.646  [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:38.646  [33/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:38.646  [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:38.646  [35/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:38.646  [36/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:38.646  [37/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:38.646  [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:38.646  [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:38.646  [40/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:38.646  [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:38.646  [42/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:38.646  [43/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:38.646  [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:38.646  [45/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:38.646  [46/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:38.646  [47/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:38.646  [48/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:38.646  [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:38.646  [50/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:38.646  [51/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:38.646  [52/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:38.646  [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:38.646  [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:38.646  [55/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:38.646  [56/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:38.646  [57/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:38.646  [58/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:38.906  [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:38.906  [60/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:38.906  [61/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:38.906  [62/707] Linking static target lib/librte_ring.a
00:01:38.906  [63/707] Linking static target lib/librte_pci.a
00:01:38.906  [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:38.906  [65/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:38.906  [66/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:38.906  [67/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:38.906  [68/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:38.906  [69/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:38.906  [70/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:38.906  [71/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:38.906  [72/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:38.906  [73/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:38.906  [74/707] Linking static target lib/librte_meter.a
00:01:38.906  [75/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:38.906  [76/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:38.906  [77/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:38.906  [78/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:38.906  [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:38.906  [80/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:38.906  [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:38.906  [82/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:38.906  [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:38.906  [84/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:38.906  [85/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:38.906  [86/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.906  [87/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:38.906  [88/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:38.906  [89/707] Linking target lib/librte_log.so.24.0
00:01:38.906  [90/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:38.906  [91/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:39.170  [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:39.170  [93/707] Linking static target lib/librte_net.a
00:01:39.170  [94/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:39.170  [95/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:39.170  [96/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:39.170  [97/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.170  [98/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:39.170  [99/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:39.170  [100/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:39.170  [101/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:39.170  [102/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:39.170  [103/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:39.170  [104/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:39.170  [105/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.170  [106/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.170  [107/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:39.170  [108/707] Linking target lib/librte_kvargs.so.24.0
00:01:39.436  [109/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:39.436  [110/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:39.436  [111/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:39.436  [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:39.436  [113/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:39.436  [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:39.436  [115/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:39.436  [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:39.436  [117/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:39.436  [118/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:39.436  [119/707] Linking static target lib/librte_cfgfile.a
00:01:39.436  [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:39.436  [121/707] Linking static target lib/librte_cmdline.a
00:01:39.436  [122/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:39.436  [123/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:39.436  [124/707] Linking static target lib/librte_mempool.a
00:01:39.436  [125/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.436  [126/707] Linking static target lib/librte_bitratestats.a
00:01:39.436  [127/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:39.436  [128/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:39.436  [129/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:39.436  [130/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:39.696  [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:39.696  [132/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:39.696  [133/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:39.696  [134/707] Linking static target lib/librte_metrics.a
00:01:39.696  [135/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:39.696  [136/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:39.696  [137/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:39.696  [138/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:39.696  [139/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:39.696  [140/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:39.696  [141/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:39.696  [142/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:39.696  [143/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:01:39.964  [144/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:39.964  [145/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:39.964  [146/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:01:39.964  [147/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:39.964  [148/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:39.964  [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:39.964  [150/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.964  [151/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:39.964  [152/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:01:39.964  [153/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:01:39.964  [154/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:39.964  [155/707] Linking static target lib/librte_eal.a
00:01:39.964  [156/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:39.964  [157/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:39.964  [158/707] Linking static target lib/librte_compressdev.a
00:01:39.964  [159/707] Linking static target lib/librte_telemetry.a
00:01:39.964  [160/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:01:39.964  [161/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.964  [162/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:39.964  [163/707] Linking static target lib/librte_rcu.a
00:01:39.964  [164/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:01:40.225  [165/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:01:40.225  [166/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:40.225  [167/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:40.225  [168/707] Linking static target lib/librte_timer.a
00:01:40.225  [169/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:40.225  [170/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:40.225  [171/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:01:40.225  [172/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:01:40.225  [173/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.225  [174/707] Linking static target lib/librte_bbdev.a
00:01:40.225  [175/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:40.225  [176/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:40.225  [177/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:01:40.225  [178/707] Linking static target lib/librte_distributor.a
00:01:40.225  [179/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:01:40.225  [180/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:01:40.225  [181/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:40.488  [182/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:01:40.488  [183/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:01:40.488  [184/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:01:40.488  [185/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:40.488  [186/707] Linking static target lib/librte_mbuf.a
00:01:40.488  [187/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:01:40.488  [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:01:40.488  [189/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:01:40.488  [190/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:01:40.488  [191/707] Linking static target lib/librte_jobstats.a
00:01:40.488  [192/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:01:40.488  [193/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:01:40.488  [194/707] Linking static target lib/librte_dispatcher.a
00:01:40.488  [195/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:01:40.488  [196/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.488  [197/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:01:40.488  [198/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:01:40.488  [199/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.488  [200/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:01:40.751  [201/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:40.751  [202/707] Linking static target lib/librte_dmadev.a
00:01:40.751  [203/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:01:40.751  [204/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:01:40.751  [205/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:01:40.751  [206/707] Linking static target lib/librte_gpudev.a
00:01:40.751  [207/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:01:40.751  [208/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.751  [209/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:01:40.751  [210/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.751  [211/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:01:40.751  [212/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:01:40.751  [213/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:01:40.751  [214/707] Linking static target lib/librte_gro.a
00:01:40.751  [215/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:01:40.751  [216/707] Linking target lib/librte_telemetry.so.24.0
00:01:40.751  [217/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.751  [218/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:40.751  [219/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:01:40.752  [220/707] Linking static target lib/librte_gso.a
00:01:40.752  [221/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:01:41.016  [222/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.016  [223/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.016  [224/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:01:41.016  [225/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:41.016  [226/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:41.016  [227/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:41.016  [228/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:01:41.016  [229/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:41.016  [230/707] Linking static target lib/librte_latencystats.a
00:01:41.016  [231/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:41.016  [232/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.016  [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:01:41.016  [234/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:01:41.016  [235/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:01:41.016  [236/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:01:41.016  [237/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:41.016  [238/707] Linking static target lib/librte_bpf.a
00:01:41.016  [239/707] Linking static target lib/librte_ip_frag.a
00:01:41.016  [240/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:01:41.016  [241/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:41.016  [242/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:01:41.016  [243/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:41.016  [244/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:01:41.277  [245/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:01:41.277  [246/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.277  [247/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:01:41.277  [248/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.277  [249/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:01:41.277  [250/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:01:41.277  [251/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:01:41.277  [252/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:01:41.277  [253/707] Linking static target lib/librte_stack.a
00:01:41.277  [254/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.277  [255/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.277  [256/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:01:41.277  [257/707] Linking static target lib/librte_regexdev.a
00:01:41.277  [258/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.277  [259/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:41.277  [260/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.277  [261/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:01:41.538  [262/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.538  [263/707] Linking static target lib/librte_pcapng.a
00:01:41.538  [264/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:01:41.538  [265/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:41.538  [266/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:01:41.538  [267/707] Linking static target lib/librte_mldev.a
00:01:41.538  [268/707] Linking static target lib/librte_power.a
00:01:41.538  [269/707] Linking static target lib/librte_rawdev.a
00:01:41.538  [270/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:41.538  [271/707] Linking static target lib/librte_security.a
00:01:41.538  [272/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.539  [273/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.539  [274/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:01:41.539  [275/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.539  [276/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:01:41.800  [277/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:01:41.800  [278/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:41.800  [279/707] Linking static target lib/librte_reorder.a
00:01:41.800  [280/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:41.800  [281/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:01:41.800  [282/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:01:41.800  [283/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:01:41.800  [284/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:01:41.800  [285/707] Linking static target lib/librte_efd.a
00:01:41.800  [286/707] Linking static target lib/librte_rib.a
00:01:41.800  [287/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:01:41.800  [288/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:01:41.800  [289/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:01:41.800  [290/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:01:41.800  [291/707] Linking static target lib/librte_lpm.a
00:01:41.800  [292/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:01:41.800  [293/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:01:41.800  [294/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:01:41.800  [295/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.800  [296/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:01:41.800  [297/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:41.800  [298/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:42.062  [299/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:01:42.062  [300/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:42.062  [301/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:42.062  [302/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:01:42.062  [303/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:01:42.062  [304/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:01:42.062  [305/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:42.062  [306/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:01:42.062  [307/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:01:42.062  [308/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:01:42.328  [309/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:42.328  [310/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.328  [311/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.328  [312/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.328  [313/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:01:42.328  [314/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.328  [315/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.328  [316/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:01:42.328  [317/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:01:42.590  [318/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.590  [319/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:01:42.590  [320/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:01:42.590  [321/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:01:42.590  [322/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:42.590  [323/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.590  [324/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:01:42.590  [325/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:01:42.590  [326/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:01:42.590  [327/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:01:42.590  [328/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.590  [329/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:01:42.590  [330/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:01:42.590  [331/707] Compiling C object lib/librte_node.a.p/node_null.c.o
00:01:42.590  [332/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:01:42.590  [333/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.590  [334/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:01:42.590  [335/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:42.590  [336/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:01:42.590  [337/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:01:42.850  [338/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:01:42.850  [339/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:01:42.850  [340/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:01:42.850  [341/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:42.850  [342/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:01:42.850  [343/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:42.850  [344/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:01:42.850  [345/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:01:42.850  [346/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:01:42.850  [347/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:01:42.850  [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:01:43.113  [349/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:01:43.113  [350/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:01:43.113  [351/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:01:43.113  [352/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:01:43.113  [353/707] Linking static target lib/librte_sched.a
00:01:43.113  [354/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:43.113  [355/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:01:43.113  [356/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:01:43.113  [357/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:01:43.113  [358/707] Linking static target lib/librte_cryptodev.a
00:01:43.113  [359/707] Linking static target lib/librte_fib.a
00:01:43.113  [360/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:01:43.113  [361/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:01:43.113  [362/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:01:43.375  [363/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:01:43.375  [364/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:01:43.375  [365/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:01:43.375  [366/707] Compiling C object lib/librte_node.a.p/node_log.c.o
00:01:43.375  [367/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:43.375  [368/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:01:43.375  [369/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:01:43.375  [370/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:43.639  [371/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:01:43.639  [372/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:01:43.639  [373/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:01:43.639  [374/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:01:43.639  [375/707] Linking static target lib/librte_graph.a
00:01:43.639  [376/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:01:43.639  [377/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:01:43.639  [378/707] Linking static target lib/librte_pdump.a
00:01:43.639  [379/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:01:43.639  [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:43.639  [381/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:01:43.639  [382/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:43.639  [383/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.639  [384/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:01:43.639  [385/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.639  [386/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:43.639  [387/707] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:43.640  [388/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:01:43.640  [389/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:43.902  [390/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.902  [391/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:01:43.902  [392/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:01:43.902  [393/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:43.902  [394/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:01:43.902  [395/707] Linking static target lib/librte_member.a
00:01:43.902  [396/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:43.902  [397/707] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:43.902  [398/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:01:43.902  [399/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:01:43.902  [400/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:01:43.902  [401/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:01:44.169  [402/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:01:44.169  [403/707] Linking static target lib/acl/libavx2_tmp.a
00:01:44.169  [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:01:44.169  [405/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:01:44.169  [406/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:01:44.169  [407/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:44.169  [408/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:01:44.169  [409/707] Linking static target lib/librte_ipsec.a
00:01:44.169  [410/707] Linking static target lib/librte_hash.a
00:01:44.169  [411/707] Linking static target lib/librte_table.a
00:01:44.169  [412/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.169  [413/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:44.169  [414/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:01:44.169  [415/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:44.169  [416/707] Linking static target drivers/librte_bus_vdev.a
00:01:44.169  [417/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:44.169  [418/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:01:44.169  [419/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:01:44.169  [420/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:01:44.169  [421/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:01:44.169  [422/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:01:44.169  [423/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:01:44.170  [424/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:01:44.431  [425/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:01:44.431  [426/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:01:44.431  [427/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:01:44.431  [428/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:44.431  [429/707] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:44.431  [430/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:44.431  [431/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.431  [432/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:44.431  [433/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:44.431  [434/707] Linking static target drivers/librte_bus_pci.a
00:01:44.431  [435/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:01:44.431  [436/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:01:44.431  [437/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:01:44.431  [438/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:01:44.431  [439/707] Linking static target lib/librte_eventdev.a
00:01:44.431  [440/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:01:44.431  [441/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:01:44.431  [442/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:01:44.431  [443/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:01:44.431  [444/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:01:44.431  [445/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:01:44.692  [446/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:01:44.692  [447/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.692  [448/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:01:44.692  [449/707] Linking static target lib/librte_pdcp.a
00:01:44.692  [450/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.692  [451/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:01:44.692  [452/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.692  [453/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:01:44.692  [454/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:01:44.692  [455/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:01:44.692  [456/707] Linking static target lib/librte_acl.a
00:01:44.692  [457/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:01:44.692  [458/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:44.692  [459/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:01:44.692  [460/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:44.954  [461/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:01:44.954  [462/707] Linking static target drivers/librte_mempool_ring.a
00:01:44.954  [463/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:44.954  [464/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:01:44.954  [465/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:44.954  [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:01:44.954  [467/707] Linking static target lib/librte_node.a
00:01:44.954  [468/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:01:44.954  [469/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:01:44.954  [470/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:01:44.954  [471/707] Linking static target lib/librte_port.a
00:01:44.954  [472/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:01:44.954  [473/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:01:44.954  [474/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.217  [475/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:01:45.217  [476/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:01:45.217  [477/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.217  [478/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:01:45.217  [479/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.217  [480/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:01:45.217  [481/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.217  [482/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.217  [483/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:01:45.217  [484/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.479  [485/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:01:45.479  [486/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:01:45.479  [487/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:01:45.479  [488/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:01:45.479  [489/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:01:45.479  [490/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:01:45.479  [491/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:01:45.479  [492/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:01:45.479  [493/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:01:45.479  [494/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.479  [495/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:01:45.479  [496/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:01:45.479  [497/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:01:45.742  [498/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:01:45.742  [499/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:01:45.742  [500/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:01:45.742  [501/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:01:45.742  [502/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:01:45.742  [503/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:45.742  [504/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:01:45.742  [505/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:01:45.742  [506/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:01:45.742  [507/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:01:45.742  [508/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:01:45.742  [509/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:01:45.742  [510/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:01:45.742  [511/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:01:45.742  [512/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:01:45.742  [513/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:01:46.000  [514/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:01:46.000  [515/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:01:46.000  [516/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:01:46.000  [517/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:01:46.000  [518/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.000  [519/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:01:46.000  [520/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:01:46.000  [521/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:46.000  [522/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:01:46.000  [523/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:01:46.000  [524/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:46.000  [525/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:01:46.258  [526/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:01:46.258  [527/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:01:46.258  [528/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:01:46.258  [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:01:46.258  [530/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:01:46.258  [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:01:46.258  [532/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:01:46.258  [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:01:46.258  [534/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:01:46.258  [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:01:46.258  [536/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:01:46.516  [537/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:01:46.516  [538/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:01:46.516  [539/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:01:46.516  [540/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:01:46.516  [541/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:01:46.516  [542/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:01:46.516  [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:01:46.516  [544/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:01:46.516  [545/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:01:46.516  [546/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:01:46.516  [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:01:46.516  [548/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:01:46.516  [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:01:46.516  [550/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:01:46.516  [551/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:01:46.516  [552/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:01:46.516  [553/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:01:46.516  [554/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:46.774  [555/707] Linking static target lib/librte_ethdev.a
00:01:46.774  [556/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:01:46.774  [557/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:01:46.774  [558/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:01:46.774  [559/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:01:46.774  [560/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:01:46.774  [561/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:01:46.774  [562/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:01:46.774  [563/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:01:46.774  [564/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:01:46.774  [565/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:01:46.774  [566/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:01:46.774  [567/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:01:47.033  [568/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:01:47.033  [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:01:47.033  [570/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:01:47.290  [571/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:01:47.548  [572/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:01:47.548  [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:01:47.548  [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:48.114  [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:48.114  [576/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.373  [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:48.373  [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:01:48.631  [579/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:48.631  [580/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:01:49.198  [581/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:49.457  [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:49.457  [583/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:49.717  [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:49.976  [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:49.976  [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:49.976  [587/707] Linking static target drivers/librte_net_i40e.a
00:01:49.976  [588/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:49.976  [589/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:01:51.351  [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.351  [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:52.288  [592/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.288  [593/707] Linking target lib/librte_eal.so.24.0
00:01:52.288  [594/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:01:52.547  [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:01:52.547  [596/707] Linking target lib/librte_ring.so.24.0
00:01:52.547  [597/707] Linking target lib/librte_meter.so.24.0
00:01:52.547  [598/707] Linking target lib/librte_cfgfile.so.24.0
00:01:52.547  [599/707] Linking target drivers/librte_bus_vdev.so.24.0
00:01:52.547  [600/707] Linking target lib/librte_timer.so.24.0
00:01:52.547  [601/707] Linking target lib/librte_jobstats.so.24.0
00:01:52.547  [602/707] Linking target lib/librte_pci.so.24.0
00:01:52.547  [603/707] Linking target lib/librte_dmadev.so.24.0
00:01:52.547  [604/707] Linking target lib/librte_rawdev.so.24.0
00:01:52.547  [605/707] Linking target lib/librte_stack.so.24.0
00:01:52.547  [606/707] Linking target lib/librte_acl.so.24.0
00:01:52.547  [607/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:01:52.547  [608/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:01:52.547  [609/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:01:52.547  [610/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:01:52.547  [611/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:01:52.547  [612/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:01:52.806  [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:01:52.806  [614/707] Linking target lib/librte_rcu.so.24.0
00:01:52.806  [615/707] Linking target drivers/librte_bus_pci.so.24.0
00:01:52.806  [616/707] Linking target lib/librte_mempool.so.24.0
00:01:52.806  [617/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:01:52.806  [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:01:52.806  [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:01:52.806  [620/707] Linking target lib/librte_mbuf.so.24.0
00:01:52.806  [621/707] Linking target drivers/librte_mempool_ring.so.24.0
00:01:53.065  [622/707] Linking target lib/librte_rib.so.24.0
00:01:53.065  [623/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:01:53.065  [624/707] Linking target lib/librte_compressdev.so.24.0
00:01:53.065  [625/707] Linking target lib/librte_mldev.so.24.0
00:01:53.065  [626/707] Linking target lib/librte_distributor.so.24.0
00:01:53.065  [627/707] Linking target lib/librte_net.so.24.0
00:01:53.065  [628/707] Linking target lib/librte_bbdev.so.24.0
00:01:53.065  [629/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:01:53.065  [630/707] Linking target lib/librte_cryptodev.so.24.0
00:01:53.065  [631/707] Linking target lib/librte_gpudev.so.24.0
00:01:53.065  [632/707] Linking target lib/librte_regexdev.so.24.0
00:01:53.065  [633/707] Linking target lib/librte_reorder.so.24.0
00:01:53.065  [634/707] Linking target lib/librte_sched.so.24.0
00:01:53.065  [635/707] Linking target lib/librte_fib.so.24.0
00:01:53.324  [636/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:01:53.324  [637/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:01:53.324  [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:01:53.324  [639/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:01:53.324  [640/707] Linking target lib/librte_hash.so.24.0
00:01:53.324  [641/707] Linking target lib/librte_security.so.24.0
00:01:53.324  [642/707] Linking target lib/librte_cmdline.so.24.0
00:01:53.583  [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:01:53.583  [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:01:53.583  [645/707] Linking target lib/librte_lpm.so.24.0
00:01:53.583  [646/707] Linking target lib/librte_efd.so.24.0
00:01:53.583  [647/707] Linking target lib/librte_member.so.24.0
00:01:53.583  [648/707] Linking target lib/librte_pdcp.so.24.0
00:01:53.583  [649/707] Linking target lib/librte_ipsec.so.24.0
00:01:53.583  [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:01:53.841  [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:01:55.747  [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.747  [653/707] Linking target lib/librte_ethdev.so.24.0
00:01:55.747  [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:01:56.016  [655/707] Linking target lib/librte_metrics.so.24.0
00:01:56.016  [656/707] Linking target lib/librte_gso.so.24.0
00:01:56.016  [657/707] Linking target lib/librte_gro.so.24.0
00:01:56.016  [658/707] Linking target lib/librte_ip_frag.so.24.0
00:01:56.016  [659/707] Linking target lib/librte_pcapng.so.24.0
00:01:56.016  [660/707] Linking target lib/librte_bpf.so.24.0
00:01:56.016  [661/707] Linking target lib/librte_power.so.24.0
00:01:56.016  [662/707] Linking target lib/librte_eventdev.so.24.0
00:01:56.016  [663/707] Linking target drivers/librte_net_i40e.so.24.0
00:01:56.016  [664/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:01:56.016  [665/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:01:56.016  [666/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:01:56.016  [667/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:01:56.016  [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:01:56.016  [669/707] Linking target lib/librte_latencystats.so.24.0
00:01:56.016  [670/707] Linking target lib/librte_bitratestats.so.24.0
00:01:56.016  [671/707] Linking target lib/librte_pdump.so.24.0
00:01:56.016  [672/707] Linking target lib/librte_graph.so.24.0
00:01:56.016  [673/707] Linking target lib/librte_dispatcher.so.24.0
00:01:56.016  [674/707] Linking target lib/librte_port.so.24.0
00:01:56.276  [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:01:56.276  [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:01:56.276  [677/707] Linking target lib/librte_node.so.24.0
00:01:56.276  [678/707] Linking target lib/librte_table.so.24.0
00:01:56.535  [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:00.729  [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:00.729  [681/707] Linking static target lib/librte_pipeline.a
00:02:01.297  [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:01.297  [683/707] Linking static target lib/librte_vhost.a
00:02:01.556  [684/707] Linking target app/dpdk-pdump
00:02:01.556  [685/707] Linking target app/dpdk-test-gpudev
00:02:01.556  [686/707] Linking target app/dpdk-test-dma-perf
00:02:01.814  [687/707] Linking target app/dpdk-dumpcap
00:02:01.814  [688/707] Linking target app/dpdk-test-acl
00:02:01.814  [689/707] Linking target app/dpdk-test-cmdline
00:02:01.814  [690/707] Linking target app/dpdk-test-regex
00:02:01.814  [691/707] Linking target app/dpdk-test-mldev
00:02:01.814  [692/707] Linking target app/dpdk-test-flow-perf
00:02:01.814  [693/707] Linking target app/dpdk-test-security-perf
00:02:01.814  [694/707] Linking target app/dpdk-proc-info
00:02:01.814  [695/707] Linking target app/dpdk-test-pipeline
00:02:01.814  [696/707] Linking target app/dpdk-test-compress-perf
00:02:01.814  [697/707] Linking target app/dpdk-test-sad
00:02:01.814  [698/707] Linking target app/dpdk-test-fib
00:02:01.814  [699/707] Linking target app/dpdk-test-bbdev
00:02:01.814  [700/707] Linking target app/dpdk-graph
00:02:01.814  [701/707] Linking target app/dpdk-test-crypto-perf
00:02:01.814  [702/707] Linking target app/dpdk-test-eventdev
00:02:01.814  [703/707] Linking target app/dpdk-testpmd
00:02:03.718  [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.718  [705/707] Linking target lib/librte_vhost.so.24.0
00:02:06.255  [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.515  [707/707] Linking target lib/librte_pipeline.so.24.0
00:02:06.515   00:32:55	-- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp -j72 install
00:02:06.515  ninja: Entering directory `/var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp'
00:02:06.515  [0/1] Installing files.
00:02:07.089  Installing subdir /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:07.089  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:07.090  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.091  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:07.092  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.093  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:07.094  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:07.094  Installing lib/librte_log.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_eal.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_ring.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_rcu.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_mempool.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_net.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_meter.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_pci.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_metrics.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_hash.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_timer.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_acl.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_bpf.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_distributor.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_efd.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_gro.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_gso.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_lpm.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_member.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.094  Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_power.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_mldev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_rib.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_reorder.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_sched.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_security.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_stack.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_vhost.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_fib.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_port.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_pdump.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_table.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_graph.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_node.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:07.095  Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:07.095  Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:07.095  Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:07.095  Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:07.095  Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.095  Installing app/dpdk-graph to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.095  Installing app/dpdk-pdump to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.095  Installing app/dpdk-proc-info to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.095  Installing app/dpdk-test-acl to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-fib to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-testpmd to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-regex to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-sad to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include/generic
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.355  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.356  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.618  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.618  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.618  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.619  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.620  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.621  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/bin
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/pkgconfig
00:02:07.622  Installing /var/jenkins/workspace/nvme-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/pkgconfig
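
The two pkg-config files installed just above (libdpdk-libs.pc and libdpdk.pc) are how downstream builds locate this DPDK tree; the SPDK configure step later in this log does exactly that via the same pkgconfig directory. A minimal sketch of such a lookup, assuming only the stock pkg-config CLI:

  # Illustrative only: point pkg-config at the freshly installed .pc files,
  # then query the libdpdk module they describe.
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # this build should report 24.0
  pkg-config --cflags --libs libdpdk     # compile/link flags for consumers
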
00:02:07.622  Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_log.so.24
00:02:07.622  Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_log.so
00:02:07.622  Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_kvargs.so.24
00:02:07.622  Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:02:07.622  Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_telemetry.so.24
00:02:07.622  Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:02:07.622  Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_eal.so.24
00:02:07.622  Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_eal.so
00:02:07.622  Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ring.so.24
00:02:07.622  Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ring.so
00:02:07.622  Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_rcu.so.24
00:02:07.622  Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_rcu.so
00:02:07.622  Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_mempool.so.24
00:02:07.622  Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_mempool.so
00:02:07.622  Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_mbuf.so.24
00:02:07.622  Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:02:07.622  Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_net.so.24
00:02:07.622  Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_net.so
00:02:07.622  Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_meter.so.24
00:02:07.622  Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_meter.so
00:02:07.622  Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ethdev.so.24
00:02:07.622  Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:02:07.622  Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pci.so.24
00:02:07.622  Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pci.so
00:02:07.622  Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_cmdline.so.24
00:02:07.622  Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:02:07.622  Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_metrics.so.24
00:02:07.622  Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_metrics.so
00:02:07.622  Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_hash.so.24
00:02:07.622  Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_hash.so
00:02:07.622  Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_timer.so.24
00:02:07.622  Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_timer.so
00:02:07.622  Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_acl.so.24
00:02:07.622  Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_acl.so
00:02:07.622  Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_bbdev.so.24
00:02:07.622  Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:02:07.622  Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24
00:02:07.622  Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:02:07.622  Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_bpf.so.24
00:02:07.622  Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_bpf.so
00:02:07.622  Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24
00:02:07.622  Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:02:07.622  Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_compressdev.so.24
00:02:07.622  Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:02:07.622  Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24
00:02:07.622  Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:02:07.622  Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_distributor.so.24
00:02:07.622  Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_distributor.so
00:02:07.622  Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_dmadev.so.24
00:02:07.622  Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:02:07.622  Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_efd.so.24
00:02:07.622  Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_efd.so
00:02:07.622  Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_eventdev.so.24
00:02:07.622  Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:02:07.622  Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24
00:02:07.622  Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_dispatcher.so
00:02:07.622  Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_gpudev.so.24
00:02:07.622  Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:02:07.622  Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_gro.so.24
00:02:07.622  Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_gro.so
00:02:07.622  Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_gso.so.24
00:02:07.622  Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_gso.so
00:02:07.622  Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24
00:02:07.622  Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:02:07.622  Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_jobstats.so.24
00:02:07.622  './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:02:07.622  './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:02:07.622  './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:02:07.622  './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:02:07.622  './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:02:07.622  './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:02:07.622  './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:02:07.622  './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:02:07.622  './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:02:07.622  './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:02:07.622  './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:02:07.622  './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:02:07.622  Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:02:07.622  Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_latencystats.so.24
00:02:07.622  Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:02:07.622  Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_lpm.so.24
00:02:07.622  Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_lpm.so
00:02:07.622  Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_member.so.24
00:02:07.622  Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_member.so
00:02:07.622  Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pcapng.so.24
00:02:07.622  Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:02:07.622  Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_power.so.24
00:02:07.623  Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_power.so
00:02:07.623  Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_rawdev.so.24
00:02:07.623  Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:02:07.623  Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_regexdev.so.24
00:02:07.623  Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_regexdev.so
00:02:07.623  Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_mldev.so.24
00:02:07.623  Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_mldev.so
00:02:07.623  Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_rib.so.24
00:02:07.623  Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_rib.so
00:02:07.623  Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_reorder.so.24
00:02:07.623  Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_reorder.so
00:02:07.623  Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_sched.so.24
00:02:07.623  Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_sched.so
00:02:07.623  Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_security.so.24
00:02:07.623  Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_security.so
00:02:07.623  Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_stack.so.24
00:02:07.623  Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_stack.so
00:02:07.623  Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_vhost.so.24
00:02:07.623  Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_vhost.so
00:02:07.623  Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ipsec.so.24
00:02:07.623  Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_ipsec.so
00:02:07.623  Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pdcp.so.24
00:02:07.623  Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pdcp.so
00:02:07.623  Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_fib.so.24
00:02:07.623  Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_fib.so
00:02:07.623  Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_port.so.24
00:02:07.623  Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_port.so
00:02:07.623  Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pdump.so.24
00:02:07.623  Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pdump.so
00:02:07.623  Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_table.so.24
00:02:07.623  Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_table.so
00:02:07.623  Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pipeline.so.24
00:02:07.623  Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_pipeline.so
00:02:07.623  Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_graph.so.24
00:02:07.623  Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_graph.so
00:02:07.623  Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_node.so.24
00:02:07.623  Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/librte_node.so
00:02:07.623  Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:02:07.623  Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:02:07.623  Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:02:07.623  Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:02:07.623  Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:02:07.623  Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:02:07.623  Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:02:07.623  Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:02:07.623  Running custom install script '/bin/sh /var/jenkins/workspace/nvme-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
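
The quoted 'name' -> 'target' lines interleaved among the symlink installs above are this script's verbose output: every PMD library installed under dpdk/pmds-24.0/ gets a symlink directly in build/lib so the dynamic linker can resolve it alongside the regular libraries. A minimal sketch of the equivalent relinking, assuming GNU ln (the real logic lives in dpdk/buildtools/symlink-drivers-solibs.sh):

  # Hypothetical equivalent of the relink step, run from the installed lib dir:
  cd /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
  ln -sfv dpdk/pmds-24.0/librte_*.so* .
  # prints, e.g.: './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
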
00:02:07.623    00:32:56	-- common/autobuild_common.sh@192 -- $ uname -s
00:02:07.623   00:32:56	-- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:07.623   00:32:56	-- common/autobuild_common.sh@203 -- $ cat
00:02:07.623   00:32:56	-- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk
00:02:07.623  
00:02:07.623  real	0m37.408s
00:02:07.623  user	10m5.612s
00:02:07.623  sys	2m11.484s
00:02:07.623   00:32:56	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:07.623   00:32:56	-- common/autotest_common.sh@10 -- $ set +x
00:02:07.623  ************************************
00:02:07.623  END TEST build_native_dpdk
00:02:07.623  ************************************
00:02:07.623   00:32:56	-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:07.623   00:32:56	-- spdk/autobuild.sh@47 -- $ [[ 1 -eq 1 ]]
00:02:07.623   00:32:56	-- spdk/autobuild.sh@48 -- $ ocf_precompile
00:02:07.623   00:32:56	-- common/autobuild_common.sh@424 -- $ run_test autobuild_ocf_precompile _ocf_precompile
00:02:07.623   00:32:56	-- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:02:07.623   00:32:56	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:07.623   00:32:56	-- common/autotest_common.sh@10 -- $ set +x
00:02:07.623  ************************************
00:02:07.623  START TEST autobuild_ocf_precompile
00:02:07.623  ************************************
00:02:07.623   00:32:56	-- common/autotest_common.sh@1114 -- $ _ocf_precompile
00:02:07.623    00:32:56	-- common/autobuild_common.sh@21 -- $ echo --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:02:07.623    00:32:56	-- common/autobuild_common.sh@21 -- $ sed s/--enable-coverage//g
00:02:07.623   00:32:56	-- common/autobuild_common.sh@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --with-ublk --with-dpdk=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:02:07.882  Using /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:08.142  DPDK libraries: /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:08.142  DPDK includes: /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:08.142  Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk
00:02:08.401  Using 'verbs' RDMA provider
00:02:23.855  Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:02:36.065  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:36.631  Creating mk/config.mk...done.
00:02:36.631  Creating mk/cc.flags.mk...done.
00:02:36.631  Type 'make' to build.
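
The trace at the start of this test shows how its configure line was built: the full flag set is echoed, piped through sed to drop --enable-coverage, and the result is passed to configure (compare the echoed list with the actual invocation: --enable-coverage is gone). A minimal sh sketch of that filtering, with illustrative variable names and an abbreviated flag list:

  # Hypothetical restatement of the traced flag filtering:
  config_params='--enable-debug --enable-werror --enable-ubsan --enable-coverage --with-ocf'
  ocf_params=$(echo "$config_params" | sed s/--enable-coverage//g)
  ./configure $ocf_params   # same flags, minus coverage instrumentation
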
00:02:36.631   00:33:25	-- common/autobuild_common.sh@22 -- $ make -j72 include/spdk/config.h
00:02:36.631   00:33:25	-- common/autobuild_common.sh@23 -- $ CC=gcc
00:02:36.631   00:33:25	-- common/autobuild_common.sh@23 -- $ CCAR=ar
00:02:36.631   00:33:25	-- common/autobuild_common.sh@23 -- $ make -j72 -C /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf exportlib O=/var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a
00:02:36.631  make: Entering directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf'
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_ctx.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_metadata.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_queue.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/promotion/nhit.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_core.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_composite_volume.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_debug.h
00:02:36.890   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_mngt.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/acp.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/alru.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_err.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_types.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io_class.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_stats.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cleaner.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cache.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_def.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_volume.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_logger.h
00:02:37.149   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cfg.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume_priv.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io_allocator.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_stats.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.c
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_structs.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.h
00:02:37.408   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/ops.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_builder.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_misc.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_cache.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_io_class.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_flush.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_metadata.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_status.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_internal.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_bit.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment_id.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_common.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_structs.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition_structs.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cache_line.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru_structs.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop_structs.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_ops.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp_structs.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache_priv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_ops.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.c
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_ops.h
00:02:37.409   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_debug.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx_priv.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_priv.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_priv.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_def_priv.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_class.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.c
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.h
00:02:37.669   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.c
00:02:37.670   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.h
00:02:37.670   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.c
00:02:37.670   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.h
00:02:37.670   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache.c
00:02:37.670   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.c
00:02:37.670   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru_structs.h
00:02:37.670   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume_priv.h
00:02:37.933    CC env_ocf/mpool.o
00:02:37.933    CC env_ocf/ocf_env.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_pipeline.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_alock.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_async_lock.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_cache_line.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_realloc.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_rbtree.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_list.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_generator.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_user_part.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_parallelize.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_cleaner.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_io.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_request.o
00:02:37.933    CC env_ocf/src/ocf/ocf_volume.o
00:02:37.933    CC env_ocf/src/ocf/utils/utils_refcnt.o
00:02:37.933    CC env_ocf/src/ocf/promotion/nhit/nhit_hash.o
00:02:37.933    CC env_ocf/src/ocf/promotion/nhit/nhit.o
00:02:37.933    CC env_ocf/src/ocf/promotion/promotion.o
00:02:37.933    CC env_ocf/src/ocf/ocf_queue.o
00:02:37.933    CC env_ocf/src/ocf/mngt/ocf_mngt_misc.o
00:02:37.933    CC env_ocf/src/ocf/mngt/ocf_mngt_cache.o
00:02:37.933    CC env_ocf/src/ocf/mngt/ocf_mngt_common.o
00:02:37.933    CC env_ocf/src/ocf/mngt/ocf_mngt_core_pool.o
00:02:37.933    CC env_ocf/src/ocf/mngt/ocf_mngt_io_class.o
00:02:37.933    CC env_ocf/src/ocf/mngt/ocf_mngt_core.o
00:02:37.933    CC env_ocf/src/ocf/mngt/ocf_mngt_flush.o
00:02:37.933    CC env_ocf/src/ocf/ocf_stats_builder.o
00:02:37.933    CC env_ocf/src/ocf/ocf_logger.o
00:02:37.933    CC env_ocf/src/ocf/ocf_metadata.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_raw.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_eviction_policy.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_segment.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_collision.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_partition.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_raw_dynamic.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_misc.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_raw_atomic.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_superblock.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_raw_volatile.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_cleaning_policy.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_io.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_core.o
00:02:37.933    CC env_ocf/src/ocf/metadata/metadata_passive_update.o
00:02:37.933    CC env_ocf/src/ocf/cleaning/nop.o
00:02:37.933    CC env_ocf/src/ocf/cleaning/acp.o
00:02:37.933    CC env_ocf/src/ocf/cleaning/alru.o
00:02:37.933    CC env_ocf/src/ocf/ocf_seq_cutoff.o
00:02:37.933    CC env_ocf/src/ocf/cleaning/cleaning.o
00:02:37.933    CC env_ocf/src/ocf/ocf_core.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_fast.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_ops.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_bf.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_wo.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_discard.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_inv.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_wi.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_wt.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_zero.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_wa.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_common.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_d2c.o
00:02:37.933    CC env_ocf/src/ocf/engine/cache_engine.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_wb.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_pt.o
00:02:37.933    CC env_ocf/src/ocf/engine/engine_rd.o
00:02:37.933    CC env_ocf/src/ocf/ocf_io.o
00:02:37.933    CC env_ocf/src/ocf/ocf_ctx.o
00:02:37.933    CC env_ocf/src/ocf/ocf_space.o
00:02:37.933    CC env_ocf/src/ocf/ocf_stats.o
00:02:38.193    CC env_ocf/src/ocf/ocf_lru.o
00:02:38.193    CC env_ocf/src/ocf/ocf_composite_volume.o
00:02:38.193    CC env_ocf/src/ocf/concurrency/ocf_mio_concurrency.o
00:02:38.193    CC env_ocf/src/ocf/concurrency/ocf_concurrency.o
00:02:38.193    CC env_ocf/src/ocf/concurrency/ocf_pio_concurrency.o
00:02:38.193    CC env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.o
00:02:38.193    CC env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.o
00:02:38.193    CC env_ocf/src/ocf/ocf_io_class.o
00:02:38.193    CC env_ocf/src/ocf/ocf_cache.o
00:02:38.452    CC env_ocf/src/ocf/ocf_request.o
00:02:39.019    LIB libspdk_ocfenv.a
00:02:39.278  cp /var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a /var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a
00:02:39.278  make: Leaving directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf'
00:02:39.278   00:33:28	-- common/autobuild_common.sh@25 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a'
00:02:39.278   00:33:28	-- common/autobuild_common.sh@27 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a
00:02:39.538  Using /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:39.538  DPDK libraries: /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:02:39.538  DPDK includes: //var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:02:39.538  Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk
00:02:40.229  Using 'verbs' RDMA provider
00:02:52.775  Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:03:04.989  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:04.989  Creating mk/config.mk...done.
00:03:04.989  Creating mk/cc.flags.mk...done.
00:03:04.989  Type 'make' to build.
00:03:04.989  
00:03:04.989  real	0m56.633s
00:03:04.989  user	0m53.663s
00:03:04.989  sys	0m41.760s
00:03:04.989   00:33:53	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:03:04.989   00:33:53	-- common/autotest_common.sh@10 -- $ set +x
00:03:04.989  ************************************
00:03:04.989  END TEST autobuild_ocf_precompile
00:03:04.989  ************************************
00:03:04.989   00:33:53	-- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:04.989   00:33:53	-- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:04.989   00:33:53	-- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:04.989   00:33:53	-- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:04.989   00:33:53	-- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:04.989   00:33:53	-- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a --with-shared
00:03:04.989  Using /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:03:04.989  DPDK libraries: /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib
00:03:04.989  DPDK includes: //var/jenkins/workspace/nvme-phy-autotest/dpdk/build/include
00:03:04.989  Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk
00:03:04.989  Using 'verbs' RDMA provider
00:03:18.154  Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:03:30.373  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:30.373  Creating mk/config.mk...done.
00:03:30.373  Creating mk/cc.flags.mk...done.
00:03:30.373  Type 'make' to build.
00:03:30.373   00:34:18	-- spdk/autobuild.sh@69 -- $ run_test make make -j72
00:03:30.373   00:34:18	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:03:30.373   00:34:18	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:03:30.373   00:34:18	-- common/autotest_common.sh@10 -- $ set +x
00:03:30.373  ************************************
00:03:30.373  START TEST make
00:03:30.373  ************************************
00:03:30.373   00:34:18	-- common/autotest_common.sh@1114 -- $ make -j72
00:03:30.373  make[1]: Nothing to be done for 'all'.
00:03:45.283    CC lib/ut/ut.o
00:03:45.283    CC lib/log/log.o
00:03:45.283    CC lib/log/log_flags.o
00:03:45.283    CC lib/log/log_deprecated.o
00:03:45.283    CC lib/ut_mock/mock.o
00:03:45.283  make[3]: '/var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a' is up to date.
00:03:45.283    LIB libspdk_ut_mock.a
00:03:45.283    LIB libspdk_ut.a
00:03:45.283    LIB libspdk_log.a
00:03:45.283    SO libspdk_ut_mock.so.5.0
00:03:45.283    SO libspdk_ut.so.1.0
00:03:45.283    SO libspdk_log.so.6.1
00:03:45.283    SYMLINK libspdk_ut_mock.so
00:03:45.283    SYMLINK libspdk_ut.so
00:03:45.283    SYMLINK libspdk_log.so
00:03:45.283    CC lib/ioat/ioat.o
00:03:45.283    CXX lib/trace_parser/trace.o
00:03:45.283    CC lib/dma/dma.o
00:03:45.283    CC lib/util/base64.o
00:03:45.283    CC lib/util/bit_array.o
00:03:45.283    CC lib/util/crc16.o
00:03:45.283    CC lib/util/cpuset.o
00:03:45.283    CC lib/util/crc32.o
00:03:45.283    CC lib/util/crc32c.o
00:03:45.283    CC lib/util/crc32_ieee.o
00:03:45.283    CC lib/util/crc64.o
00:03:45.283    CC lib/util/dif.o
00:03:45.283    CC lib/util/fd.o
00:03:45.283    CC lib/util/math.o
00:03:45.283    CC lib/util/file.o
00:03:45.283    CC lib/util/hexlify.o
00:03:45.283    CC lib/util/iov.o
00:03:45.283    CC lib/util/string.o
00:03:45.283    CC lib/util/pipe.o
00:03:45.283    CC lib/util/strerror_tls.o
00:03:45.283    CC lib/util/xor.o
00:03:45.283    CC lib/util/uuid.o
00:03:45.283    CC lib/util/fd_group.o
00:03:45.283    CC lib/util/zipf.o
00:03:45.283    CC lib/vfio_user/host/vfio_user_pci.o
00:03:45.283    CC lib/vfio_user/host/vfio_user.o
00:03:45.283    LIB libspdk_dma.a
00:03:45.283    SO libspdk_dma.so.3.0
00:03:45.283    LIB libspdk_ioat.a
00:03:45.283    SYMLINK libspdk_dma.so
00:03:45.283    SO libspdk_ioat.so.6.0
00:03:45.283    SYMLINK libspdk_ioat.so
00:03:45.283    LIB libspdk_vfio_user.a
00:03:45.283    SO libspdk_vfio_user.so.4.0
00:03:45.283    LIB libspdk_util.a
00:03:45.283    SYMLINK libspdk_vfio_user.so
00:03:45.542    SO libspdk_util.so.8.0
00:03:45.542    SYMLINK libspdk_util.so
00:03:45.801    CC lib/conf/conf.o
00:03:45.801    LIB libspdk_trace_parser.a
00:03:45.801    CC lib/vmd/vmd.o
00:03:45.801    CC lib/vmd/led.o
00:03:45.801    CC lib/json/json_parse.o
00:03:45.801    CC lib/json/json_util.o
00:03:45.801    CC lib/idxd/idxd.o
00:03:45.801    CC lib/json/json_write.o
00:03:45.801    CC lib/idxd/idxd_user.o
00:03:45.801    CC lib/idxd/idxd_kernel.o
00:03:45.801    CC lib/env_dpdk/env.o
00:03:45.801    CC lib/env_dpdk/memory.o
00:03:45.801    CC lib/rdma/common.o
00:03:45.801    CC lib/env_dpdk/pci.o
00:03:45.801    CC lib/rdma/rdma_verbs.o
00:03:45.801    CC lib/env_dpdk/init.o
00:03:45.801    CC lib/env_dpdk/threads.o
00:03:45.801    CC lib/env_dpdk/pci_ioat.o
00:03:45.801    CC lib/env_dpdk/pci_virtio.o
00:03:45.801    CC lib/env_dpdk/pci_vmd.o
00:03:45.801    CC lib/env_dpdk/pci_idxd.o
00:03:45.801    CC lib/env_dpdk/pci_event.o
00:03:45.801    CC lib/env_dpdk/sigbus_handler.o
00:03:45.801    CC lib/env_dpdk/pci_dpdk.o
00:03:45.801    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:45.801    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:45.801    SO libspdk_trace_parser.so.4.0
00:03:46.062    SYMLINK libspdk_trace_parser.so
00:03:46.062    LIB libspdk_conf.a
00:03:46.062    SO libspdk_conf.so.5.0
00:03:46.062    LIB libspdk_json.a
00:03:46.062    SYMLINK libspdk_conf.so
00:03:46.062    LIB libspdk_rdma.a
00:03:46.062    SO libspdk_json.so.5.1
00:03:46.062    SO libspdk_rdma.so.5.0
00:03:46.320    SYMLINK libspdk_json.so
00:03:46.320    SYMLINK libspdk_rdma.so
00:03:46.320    LIB libspdk_idxd.a
00:03:46.320    SO libspdk_idxd.so.11.0
00:03:46.579    CC lib/jsonrpc/jsonrpc_server.o
00:03:46.579    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:46.579    CC lib/jsonrpc/jsonrpc_client.o
00:03:46.579    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:46.579    SYMLINK libspdk_idxd.so
00:03:46.579    LIB libspdk_vmd.a
00:03:46.579    SO libspdk_vmd.so.5.0
00:03:46.579    SYMLINK libspdk_vmd.so
00:03:46.839    LIB libspdk_jsonrpc.a
00:03:46.839    SO libspdk_jsonrpc.so.5.1
00:03:46.839    SYMLINK libspdk_jsonrpc.so
00:03:47.098    CC lib/rpc/rpc.o
00:03:47.098    LIB libspdk_env_dpdk.a
00:03:47.358    SO libspdk_env_dpdk.so.13.0
00:03:47.358    LIB libspdk_rpc.a
00:03:47.358    SO libspdk_rpc.so.5.0
00:03:47.358    SYMLINK libspdk_env_dpdk.so
00:03:47.358    SYMLINK libspdk_rpc.so
00:03:47.617    CC lib/sock/sock.o
00:03:47.617    CC lib/sock/sock_rpc.o
00:03:47.617    CC lib/trace/trace.o
00:03:47.617    CC lib/notify/notify.o
00:03:47.617    CC lib/trace/trace_flags.o
00:03:47.617    CC lib/notify/notify_rpc.o
00:03:47.617    CC lib/trace/trace_rpc.o
00:03:47.876    LIB libspdk_notify.a
00:03:47.876    LIB libspdk_trace.a
00:03:47.876    SO libspdk_notify.so.5.0
00:03:47.876    SO libspdk_trace.so.9.0
00:03:47.876    SYMLINK libspdk_notify.so
00:03:48.135    SYMLINK libspdk_trace.so
00:03:48.135    LIB libspdk_sock.a
00:03:48.135    SO libspdk_sock.so.8.0
00:03:48.135    SYMLINK libspdk_sock.so
00:03:48.135    CC lib/thread/iobuf.o
00:03:48.135    CC lib/thread/thread.o
00:03:48.394    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:48.394    CC lib/nvme/nvme_ctrlr.o
00:03:48.394    CC lib/nvme/nvme_ns_cmd.o
00:03:48.394    CC lib/nvme/nvme_fabric.o
00:03:48.394    CC lib/nvme/nvme_ns.o
00:03:48.394    CC lib/nvme/nvme_pcie_common.o
00:03:48.394    CC lib/nvme/nvme_pcie.o
00:03:48.394    CC lib/nvme/nvme_qpair.o
00:03:48.394    CC lib/nvme/nvme.o
00:03:48.394    CC lib/nvme/nvme_quirks.o
00:03:48.394    CC lib/nvme/nvme_transport.o
00:03:48.394    CC lib/nvme/nvme_discovery.o
00:03:48.394    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:48.394    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:48.394    CC lib/nvme/nvme_tcp.o
00:03:48.394    CC lib/nvme/nvme_opal.o
00:03:48.394    CC lib/nvme/nvme_io_msg.o
00:03:48.394    CC lib/nvme/nvme_cuse.o
00:03:48.394    CC lib/nvme/nvme_poll_group.o
00:03:48.394    CC lib/nvme/nvme_zns.o
00:03:48.394    CC lib/nvme/nvme_vfio_user.o
00:03:48.394    CC lib/nvme/nvme_rdma.o
00:03:49.769    LIB libspdk_thread.a
00:03:49.769    SO libspdk_thread.so.9.0
00:03:50.028    SYMLINK libspdk_thread.so
00:03:50.286    CC lib/blob/request.o
00:03:50.286    CC lib/blob/blobstore.o
00:03:50.286    CC lib/blob/zeroes.o
00:03:50.286    CC lib/blob/blob_bs_dev.o
00:03:50.286    CC lib/accel/accel_rpc.o
00:03:50.286    CC lib/virtio/virtio.o
00:03:50.286    CC lib/accel/accel.o
00:03:50.286    CC lib/virtio/virtio_vhost_user.o
00:03:50.286    CC lib/virtio/virtio_vfio_user.o
00:03:50.286    CC lib/accel/accel_sw.o
00:03:50.286    CC lib/virtio/virtio_pci.o
00:03:50.286    CC lib/init/json_config.o
00:03:50.286    CC lib/init/subsystem.o
00:03:50.286    CC lib/init/subsystem_rpc.o
00:03:50.286    CC lib/init/rpc.o
00:03:50.546    LIB libspdk_init.a
00:03:50.546    SO libspdk_init.so.4.0
00:03:50.546    SYMLINK libspdk_init.so
00:03:50.546    LIB libspdk_nvme.a
00:03:50.546    LIB libspdk_virtio.a
00:03:50.546    SO libspdk_virtio.so.6.0
00:03:50.804    SYMLINK libspdk_virtio.so
00:03:50.804    SO libspdk_nvme.so.12.0
00:03:50.804    CC lib/event/app.o
00:03:50.804    CC lib/event/reactor.o
00:03:50.804    CC lib/event/log_rpc.o
00:03:50.804    CC lib/event/app_rpc.o
00:03:50.804    CC lib/event/scheduler_static.o
00:03:51.062    SYMLINK libspdk_nvme.so
00:03:51.321    LIB libspdk_event.a
00:03:51.321    LIB libspdk_accel.a
00:03:51.321    SO libspdk_event.so.12.0
00:03:51.321    SO libspdk_accel.so.14.0
00:03:51.321    SYMLINK libspdk_event.so
00:03:51.321    SYMLINK libspdk_accel.so
00:03:51.579    CC lib/bdev/bdev.o
00:03:51.579    CC lib/bdev/bdev_rpc.o
00:03:51.579    CC lib/bdev/bdev_zone.o
00:03:51.579    CC lib/bdev/scsi_nvme.o
00:03:51.579    CC lib/bdev/part.o
00:03:52.952    LIB libspdk_blob.a
00:03:53.210    SO libspdk_blob.so.10.1
00:03:53.211    SYMLINK libspdk_blob.so
00:03:53.469    CC lib/lvol/lvol.o
00:03:53.469    CC lib/blobfs/blobfs.o
00:03:53.469    CC lib/blobfs/tree.o
00:03:54.405    LIB libspdk_bdev.a
00:03:54.405    SO libspdk_bdev.so.14.0
00:03:54.405    LIB libspdk_blobfs.a
00:03:54.405    SO libspdk_blobfs.so.9.0
00:03:54.405    LIB libspdk_lvol.a
00:03:54.405    SYMLINK libspdk_bdev.so
00:03:54.405    SO libspdk_lvol.so.9.1
00:03:54.405    SYMLINK libspdk_blobfs.so
00:03:54.405    SYMLINK libspdk_lvol.so
00:03:54.664    CC lib/nvmf/ctrlr.o
00:03:54.664    CC lib/nvmf/ctrlr_discovery.o
00:03:54.664    CC lib/nbd/nbd.o
00:03:54.664    CC lib/nvmf/subsystem.o
00:03:54.664    CC lib/nvmf/ctrlr_bdev.o
00:03:54.664    CC lib/nbd/nbd_rpc.o
00:03:54.664    CC lib/nvmf/nvmf.o
00:03:54.664    CC lib/nvmf/transport.o
00:03:54.664    CC lib/nvmf/nvmf_rpc.o
00:03:54.664    CC lib/ftl/ftl_init.o
00:03:54.664    CC lib/ftl/ftl_core.o
00:03:54.664    CC lib/ftl/ftl_layout.o
00:03:54.664    CC lib/ftl/ftl_debug.o
00:03:54.664    CC lib/scsi/lun.o
00:03:54.664    CC lib/nvmf/tcp.o
00:03:54.664    CC lib/ftl/ftl_io.o
00:03:54.664    CC lib/nvmf/rdma.o
00:03:54.664    CC lib/scsi/port.o
00:03:54.664    CC lib/ftl/ftl_l2p.o
00:03:54.664    CC lib/ftl/ftl_sb.o
00:03:54.664    CC lib/scsi/dev.o
00:03:54.664    CC lib/scsi/scsi.o
00:03:54.664    CC lib/ftl/ftl_l2p_flat.o
00:03:54.664    CC lib/ublk/ublk.o
00:03:54.665    CC lib/ublk/ublk_rpc.o
00:03:54.665    CC lib/ftl/ftl_nv_cache.o
00:03:54.665    CC lib/ftl/ftl_band.o
00:03:54.665    CC lib/scsi/scsi_bdev.o
00:03:54.665    CC lib/ftl/ftl_writer.o
00:03:54.665    CC lib/ftl/ftl_band_ops.o
00:03:54.665    CC lib/scsi/scsi_pr.o
00:03:54.665    CC lib/scsi/scsi_rpc.o
00:03:54.665    CC lib/ftl/ftl_rq.o
00:03:54.665    CC lib/scsi/task.o
00:03:54.665    CC lib/ftl/ftl_reloc.o
00:03:54.665    CC lib/ftl/ftl_l2p_cache.o
00:03:54.665    CC lib/ftl/ftl_p2l.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_md.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_band.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:54.665    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:54.665    CC lib/ftl/utils/ftl_conf.o
00:03:54.665    CC lib/ftl/utils/ftl_md.o
00:03:54.665    CC lib/ftl/utils/ftl_mempool.o
00:03:54.665    CC lib/ftl/utils/ftl_property.o
00:03:54.665    CC lib/ftl/utils/ftl_bitmap.o
00:03:54.665    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:54.665    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:54.665    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:54.665    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:54.665    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:54.665    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:54.665    CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:54.665    CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:54.665    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:54.665    CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:54.665    CC lib/ftl/base/ftl_base_dev.o
00:03:54.665    CC lib/ftl/base/ftl_base_bdev.o
00:03:54.665    CC lib/ftl/ftl_trace.o
00:03:55.232    LIB libspdk_nbd.a
00:03:55.232    SO libspdk_nbd.so.6.0
00:03:55.232    SYMLINK libspdk_nbd.so
00:03:55.232    LIB libspdk_ublk.a
00:03:55.232    LIB libspdk_scsi.a
00:03:55.491    SO libspdk_ublk.so.2.0
00:03:55.491    SO libspdk_scsi.so.8.0
00:03:55.491    SYMLINK libspdk_ublk.so
00:03:55.491    SYMLINK libspdk_scsi.so
00:03:55.750    LIB libspdk_ftl.a
00:03:55.750    CC lib/iscsi/conn.o
00:03:55.750    CC lib/iscsi/init_grp.o
00:03:55.750    CC lib/iscsi/iscsi.o
00:03:55.750    CC lib/iscsi/md5.o
00:03:55.750    CC lib/iscsi/param.o
00:03:55.750    CC lib/iscsi/portal_grp.o
00:03:55.750    CC lib/iscsi/tgt_node.o
00:03:55.750    CC lib/iscsi/iscsi_subsystem.o
00:03:55.750    CC lib/vhost/vhost.o
00:03:55.750    CC lib/iscsi/iscsi_rpc.o
00:03:55.750    CC lib/vhost/vhost_rpc.o
00:03:55.750    CC lib/vhost/vhost_scsi.o
00:03:55.750    CC lib/iscsi/task.o
00:03:55.750    CC lib/vhost/vhost_blk.o
00:03:55.750    CC lib/vhost/rte_vhost_user.o
00:03:56.011    SO libspdk_ftl.so.8.0
00:03:56.271    SYMLINK libspdk_ftl.so
00:03:56.839    LIB libspdk_nvmf.a
00:03:56.839    LIB libspdk_vhost.a
00:03:56.839    SO libspdk_nvmf.so.17.0
00:03:56.839    SO libspdk_vhost.so.7.1
00:03:57.097    SYMLINK libspdk_vhost.so
00:03:57.097    SYMLINK libspdk_nvmf.so
00:03:57.097    LIB libspdk_iscsi.a
00:03:57.097    SO libspdk_iscsi.so.7.0
00:03:57.357    SYMLINK libspdk_iscsi.so
00:03:57.616    CC module/env_dpdk/env_dpdk_rpc.o
00:03:57.875    CC module/sock/posix/posix.o
00:03:57.875    CC module/accel/error/accel_error.o
00:03:57.875    CC module/accel/error/accel_error_rpc.o
00:03:57.875    CC module/accel/iaa/accel_iaa.o
00:03:57.875    CC module/accel/iaa/accel_iaa_rpc.o
00:03:57.875    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:57.875    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:57.875    CC module/blob/bdev/blob_bdev.o
00:03:57.875    CC module/accel/ioat/accel_ioat.o
00:03:57.875    CC module/scheduler/gscheduler/gscheduler.o
00:03:57.875    CC module/accel/ioat/accel_ioat_rpc.o
00:03:57.875    CC module/accel/dsa/accel_dsa.o
00:03:57.875    CC module/accel/dsa/accel_dsa_rpc.o
00:03:57.875    LIB libspdk_env_dpdk_rpc.a
00:03:57.875    SO libspdk_env_dpdk_rpc.so.5.0
00:03:57.875    SYMLINK libspdk_env_dpdk_rpc.so
00:03:58.133    LIB libspdk_scheduler_gscheduler.a
00:03:58.133    LIB libspdk_scheduler_dpdk_governor.a
00:03:58.133    LIB libspdk_accel_error.a
00:03:58.133    LIB libspdk_accel_ioat.a
00:03:58.133    LIB libspdk_scheduler_dynamic.a
00:03:58.133    SO libspdk_scheduler_gscheduler.so.3.0
00:03:58.133    LIB libspdk_accel_iaa.a
00:03:58.133    SO libspdk_scheduler_dpdk_governor.so.3.0
00:03:58.133    SO libspdk_accel_error.so.1.0
00:03:58.133    SO libspdk_scheduler_dynamic.so.3.0
00:03:58.133    SO libspdk_accel_iaa.so.2.0
00:03:58.133    SO libspdk_accel_ioat.so.5.0
00:03:58.133    SYMLINK libspdk_scheduler_gscheduler.so
00:03:58.133    SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:58.133    SYMLINK libspdk_accel_error.so
00:03:58.133    LIB libspdk_accel_dsa.a
00:03:58.133    LIB libspdk_blob_bdev.a
00:03:58.133    SYMLINK libspdk_scheduler_dynamic.so
00:03:58.133    SYMLINK libspdk_accel_iaa.so
00:03:58.133    SYMLINK libspdk_accel_ioat.so
00:03:58.133    SO libspdk_blob_bdev.so.10.1
00:03:58.133    SO libspdk_accel_dsa.so.4.0
00:03:58.133    SYMLINK libspdk_blob_bdev.so
00:03:58.392    SYMLINK libspdk_accel_dsa.so
00:03:58.650    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:58.650    CC module/bdev/malloc/bdev_malloc.o
00:03:58.650    CC module/bdev/raid/bdev_raid.o
00:03:58.650    CC module/bdev/raid/bdev_raid_rpc.o
00:03:58.650    CC module/bdev/gpt/gpt.o
00:03:58.650    CC module/bdev/gpt/vbdev_gpt.o
00:03:58.650    CC module/bdev/raid/bdev_raid_sb.o
00:03:58.650    CC module/bdev/raid/raid0.o
00:03:58.650    CC module/bdev/passthru/vbdev_passthru.o
00:03:58.650    LIB libspdk_sock_posix.a
00:03:58.650    CC module/bdev/raid/raid1.o
00:03:58.650    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:58.650    CC module/bdev/raid/concat.o
00:03:58.650    CC module/blobfs/bdev/blobfs_bdev.o
00:03:58.650    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:58.650    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:58.650    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:58.650    CC module/bdev/iscsi/bdev_iscsi.o
00:03:58.650    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:58.650    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:58.650    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:58.650    CC module/bdev/ftl/bdev_ftl.o
00:03:58.650    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:58.650    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:58.650    CC module/bdev/aio/bdev_aio.o
00:03:58.650    CC module/bdev/aio/bdev_aio_rpc.o
00:03:58.650    CC module/bdev/lvol/vbdev_lvol.o
00:03:58.650    CC module/bdev/delay/vbdev_delay.o
00:03:58.650    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:58.650    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:58.650    CC module/bdev/error/vbdev_error.o
00:03:58.650    CC module/bdev/nvme/bdev_nvme.o
00:03:58.650    CC module/bdev/null/bdev_null.o
00:03:58.650    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:58.650    CC module/bdev/error/vbdev_error_rpc.o
00:03:58.650    CC module/bdev/null/bdev_null_rpc.o
00:03:58.650    CC module/bdev/nvme/nvme_rpc.o
00:03:58.650    CC module/bdev/nvme/bdev_mdns_client.o
00:03:58.650    CC module/bdev/nvme/vbdev_opal.o
00:03:58.650    CC module/bdev/split/vbdev_split.o
00:03:58.650    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:58.650    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:58.650    CC module/bdev/split/vbdev_split_rpc.o
00:03:58.650    CC module/bdev/ocf/ctx.o
00:03:58.650    CC module/bdev/ocf/stats.o
00:03:58.650    CC module/bdev/ocf/data.o
00:03:58.650    CC module/bdev/ocf/vbdev_ocf.o
00:03:58.650    CC module/bdev/ocf/vbdev_ocf_rpc.o
00:03:58.650    CC module/bdev/ocf/utils.o
00:03:58.650    CC module/bdev/ocf/volume.o
00:03:58.650    SO libspdk_sock_posix.so.5.0
00:03:58.650    SYMLINK libspdk_sock_posix.so
00:03:58.909    LIB libspdk_bdev_split.a
00:03:58.909    LIB libspdk_blobfs_bdev.a
00:03:58.909    LIB libspdk_bdev_passthru.a
00:03:58.909    SO libspdk_bdev_split.so.5.0
00:03:58.909    SO libspdk_blobfs_bdev.so.5.0
00:03:58.909    LIB libspdk_bdev_ftl.a
00:03:58.909    SO libspdk_bdev_passthru.so.5.0
00:03:58.909    LIB libspdk_bdev_zone_block.a
00:03:59.167    LIB libspdk_bdev_gpt.a
00:03:59.167    SO libspdk_bdev_ftl.so.5.0
00:03:59.167    LIB libspdk_bdev_null.a
00:03:59.167    SYMLINK libspdk_bdev_split.so
00:03:59.167    SYMLINK libspdk_blobfs_bdev.so
00:03:59.167    SO libspdk_bdev_zone_block.so.5.0
00:03:59.167    SO libspdk_bdev_gpt.so.5.0
00:03:59.167    SYMLINK libspdk_bdev_passthru.so
00:03:59.167    LIB libspdk_bdev_error.a
00:03:59.167    SO libspdk_bdev_null.so.5.0
00:03:59.167    LIB libspdk_bdev_malloc.a
00:03:59.167    LIB libspdk_bdev_delay.a
00:03:59.167    SYMLINK libspdk_bdev_ftl.so
00:03:59.167    LIB libspdk_bdev_aio.a
00:03:59.167    SO libspdk_bdev_error.so.5.0
00:03:59.167    SYMLINK libspdk_bdev_zone_block.so
00:03:59.167    SO libspdk_bdev_malloc.so.5.0
00:03:59.167    SO libspdk_bdev_delay.so.5.0
00:03:59.167    LIB libspdk_bdev_iscsi.a
00:03:59.167    SYMLINK libspdk_bdev_gpt.so
00:03:59.167    SYMLINK libspdk_bdev_null.so
00:03:59.167    SO libspdk_bdev_aio.so.5.0
00:03:59.167    SO libspdk_bdev_iscsi.so.5.0
00:03:59.167    LIB libspdk_bdev_virtio.a
00:03:59.167    SYMLINK libspdk_bdev_error.so
00:03:59.167    SYMLINK libspdk_bdev_malloc.so
00:03:59.167    SYMLINK libspdk_bdev_delay.so
00:03:59.167    SYMLINK libspdk_bdev_aio.so
00:03:59.167    SO libspdk_bdev_virtio.so.5.0
00:03:59.167    SYMLINK libspdk_bdev_iscsi.so
00:03:59.167    LIB libspdk_bdev_ocf.a
00:03:59.426    LIB libspdk_bdev_lvol.a
00:03:59.426    SYMLINK libspdk_bdev_virtio.so
00:03:59.426    SO libspdk_bdev_lvol.so.5.0
00:03:59.426    SO libspdk_bdev_ocf.so.5.0
00:03:59.426    SYMLINK libspdk_bdev_lvol.so
00:03:59.426    SYMLINK libspdk_bdev_ocf.so
00:03:59.686    LIB libspdk_bdev_raid.a
00:03:59.686    SO libspdk_bdev_raid.so.5.0
00:03:59.686    SYMLINK libspdk_bdev_raid.so
00:04:01.066    LIB libspdk_bdev_nvme.a
00:04:01.066    SO libspdk_bdev_nvme.so.6.0
00:04:01.066    SYMLINK libspdk_bdev_nvme.so
00:04:01.634    CC module/event/subsystems/sock/sock.o
00:04:01.634    CC module/event/subsystems/vmd/vmd.o
00:04:01.634    CC module/event/subsystems/vmd/vmd_rpc.o
00:04:01.634    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:04:01.634    CC module/event/subsystems/scheduler/scheduler.o
00:04:01.634    CC module/event/subsystems/iobuf/iobuf.o
00:04:01.634    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:01.634    LIB libspdk_event_sock.a
00:04:01.634    LIB libspdk_event_vhost_blk.a
00:04:01.634    LIB libspdk_event_scheduler.a
00:04:01.634    LIB libspdk_event_vmd.a
00:04:01.634    SO libspdk_event_sock.so.4.0
00:04:01.894    SO libspdk_event_scheduler.so.3.0
00:04:01.894    SO libspdk_event_vhost_blk.so.2.0
00:04:01.894    LIB libspdk_event_iobuf.a
00:04:01.894    SO libspdk_event_vmd.so.5.0
00:04:01.894    SO libspdk_event_iobuf.so.2.0
00:04:01.894    SYMLINK libspdk_event_sock.so
00:04:01.894    SYMLINK libspdk_event_scheduler.so
00:04:01.894    SYMLINK libspdk_event_vhost_blk.so
00:04:01.894    SYMLINK libspdk_event_vmd.so
00:04:01.894    SYMLINK libspdk_event_iobuf.so
00:04:02.153    CC module/event/subsystems/accel/accel.o
00:04:02.412    LIB libspdk_event_accel.a
00:04:02.412    SO libspdk_event_accel.so.5.0
00:04:02.412    SYMLINK libspdk_event_accel.so
00:04:02.676    CC module/event/subsystems/bdev/bdev.o
00:04:03.038    LIB libspdk_event_bdev.a
00:04:03.038    SO libspdk_event_bdev.so.5.0
00:04:03.038    SYMLINK libspdk_event_bdev.so
00:04:03.318    CC module/event/subsystems/nbd/nbd.o
00:04:03.318    CC module/event/subsystems/scsi/scsi.o
00:04:03.318    CC module/event/subsystems/ublk/ublk.o
00:04:03.318    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:03.318    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:03.318    LIB libspdk_event_nbd.a
00:04:03.318    LIB libspdk_event_scsi.a
00:04:03.318    LIB libspdk_event_ublk.a
00:04:03.318    SO libspdk_event_nbd.so.5.0
00:04:03.318    SO libspdk_event_scsi.so.5.0
00:04:03.318    SO libspdk_event_ublk.so.2.0
00:04:03.589    SYMLINK libspdk_event_nbd.so
00:04:03.589    LIB libspdk_event_nvmf.a
00:04:03.589    SYMLINK libspdk_event_scsi.so
00:04:03.589    SYMLINK libspdk_event_ublk.so
00:04:03.589    SO libspdk_event_nvmf.so.5.0
00:04:03.589    SYMLINK libspdk_event_nvmf.so
00:04:03.589    CC module/event/subsystems/iscsi/iscsi.o
00:04:03.847    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:03.847    LIB libspdk_event_vhost_scsi.a
00:04:03.847    LIB libspdk_event_iscsi.a
00:04:03.847    SO libspdk_event_vhost_scsi.so.2.0
00:04:03.847    SO libspdk_event_iscsi.so.5.0
00:04:04.106    SYMLINK libspdk_event_iscsi.so
00:04:04.106    SYMLINK libspdk_event_vhost_scsi.so
00:04:04.106    SO libspdk.so.5.0
00:04:04.106    SYMLINK libspdk.so
00:04:04.372    CC app/trace_record/trace_record.o
00:04:04.372    CC app/spdk_lspci/spdk_lspci.o
00:04:04.372    CXX app/trace/trace.o
00:04:04.372    CC app/spdk_nvme_perf/perf.o
00:04:04.372    CC app/spdk_top/spdk_top.o
00:04:04.372    CC app/spdk_nvme_identify/identify.o
00:04:04.372    CC test/rpc_client/rpc_client_test.o
00:04:04.372    CC app/spdk_nvme_discover/discovery_aer.o
00:04:04.372    TEST_HEADER include/spdk/accel.h
00:04:04.372    TEST_HEADER include/spdk/accel_module.h
00:04:04.372    TEST_HEADER include/spdk/assert.h
00:04:04.372    TEST_HEADER include/spdk/base64.h
00:04:04.372    TEST_HEADER include/spdk/barrier.h
00:04:04.372    TEST_HEADER include/spdk/bdev.h
00:04:04.372    TEST_HEADER include/spdk/bdev_module.h
00:04:04.372    TEST_HEADER include/spdk/bdev_zone.h
00:04:04.372    TEST_HEADER include/spdk/bit_array.h
00:04:04.372    TEST_HEADER include/spdk/bit_pool.h
00:04:04.372    TEST_HEADER include/spdk/blob_bdev.h
00:04:04.372    TEST_HEADER include/spdk/blobfs_bdev.h
00:04:04.372    TEST_HEADER include/spdk/blobfs.h
00:04:04.372    TEST_HEADER include/spdk/blob.h
00:04:04.372    TEST_HEADER include/spdk/conf.h
00:04:04.372    TEST_HEADER include/spdk/config.h
00:04:04.372    TEST_HEADER include/spdk/cpuset.h
00:04:04.372    TEST_HEADER include/spdk/crc16.h
00:04:04.372    CC app/spdk_dd/spdk_dd.o
00:04:04.372    TEST_HEADER include/spdk/crc32.h
00:04:04.372    TEST_HEADER include/spdk/crc64.h
00:04:04.372    TEST_HEADER include/spdk/dif.h
00:04:04.372    CC examples/interrupt_tgt/interrupt_tgt.o
00:04:04.642    TEST_HEADER include/spdk/dma.h
00:04:04.642    CC app/iscsi_tgt/iscsi_tgt.o
00:04:04.642    TEST_HEADER include/spdk/endian.h
00:04:04.642    CC app/vhost/vhost.o
00:04:04.642    TEST_HEADER include/spdk/env_dpdk.h
00:04:04.642    CC app/nvmf_tgt/nvmf_main.o
00:04:04.642    TEST_HEADER include/spdk/env.h
00:04:04.642    TEST_HEADER include/spdk/event.h
00:04:04.642    TEST_HEADER include/spdk/fd_group.h
00:04:04.642    TEST_HEADER include/spdk/fd.h
00:04:04.642    TEST_HEADER include/spdk/file.h
00:04:04.642    TEST_HEADER include/spdk/ftl.h
00:04:04.642    CC examples/util/zipf/zipf.o
00:04:04.642    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:04:04.642    CC examples/nvme/hello_world/hello_world.o
00:04:04.642    TEST_HEADER include/spdk/gpt_spec.h
00:04:04.642    CC test/app/jsoncat/jsoncat.o
00:04:04.642    CC examples/vmd/lsvmd/lsvmd.o
00:04:04.642    TEST_HEADER include/spdk/hexlify.h
00:04:04.642    CC test/event/reactor/reactor.o
00:04:04.642    CC examples/nvme/hotplug/hotplug.o
00:04:04.642    CC test/thread/poller_perf/poller_perf.o
00:04:04.642    CC test/env/memory/memory_ut.o
00:04:04.642    CC examples/nvme/cmb_copy/cmb_copy.o
00:04:04.642    CC test/nvme/reset/reset.o
00:04:04.642    CC app/fio/nvme/fio_plugin.o
00:04:04.642    TEST_HEADER include/spdk/histogram_data.h
00:04:04.642    CC examples/nvme/arbitration/arbitration.o
00:04:04.642    CC examples/ioat/verify/verify.o
00:04:04.642    CC examples/accel/perf/accel_perf.o
00:04:04.642    CC test/event/reactor_perf/reactor_perf.o
00:04:04.642    CC examples/nvme/abort/abort.o
00:04:04.642    CC examples/idxd/perf/perf.o
00:04:04.642    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:04:04.642    CC examples/nvme/nvme_manage/nvme_manage.o
00:04:04.642    CC examples/vmd/led/led.o
00:04:04.642    CC examples/ioat/perf/perf.o
00:04:04.642    TEST_HEADER include/spdk/idxd.h
00:04:04.642    CC examples/sock/hello_world/hello_sock.o
00:04:04.642    TEST_HEADER include/spdk/idxd_spec.h
00:04:04.642    CC test/nvme/aer/aer.o
00:04:04.642    TEST_HEADER include/spdk/init.h
00:04:04.642    CC test/event/event_perf/event_perf.o
00:04:04.642    CC test/app/stub/stub.o
00:04:04.642    TEST_HEADER include/spdk/ioat.h
00:04:04.642    CC test/nvme/reserve/reserve.o
00:04:04.642    TEST_HEADER include/spdk/ioat_spec.h
00:04:04.642    CC app/spdk_tgt/spdk_tgt.o
00:04:04.642    CC test/nvme/sgl/sgl.o
00:04:04.642    CC test/app/histogram_perf/histogram_perf.o
00:04:04.642    CC test/nvme/overhead/overhead.o
00:04:04.642    CC test/nvme/connect_stress/connect_stress.o
00:04:04.642    CC test/nvme/err_injection/err_injection.o
00:04:04.642    TEST_HEADER include/spdk/iscsi_spec.h
00:04:04.642    CC examples/nvme/reconnect/reconnect.o
00:04:04.642    CC test/env/pci/pci_ut.o
00:04:04.642    CC test/nvme/e2edp/nvme_dp.o
00:04:04.642    TEST_HEADER include/spdk/json.h
00:04:04.642    CC test/nvme/startup/startup.o
00:04:04.642    CC test/nvme/simple_copy/simple_copy.o
00:04:04.642    CC test/env/vtophys/vtophys.o
00:04:04.642    TEST_HEADER include/spdk/jsonrpc.h
00:04:04.642    CC test/nvme/boot_partition/boot_partition.o
00:04:04.642    CC test/nvme/compliance/nvme_compliance.o
00:04:04.642    TEST_HEADER include/spdk/likely.h
00:04:04.642    TEST_HEADER include/spdk/log.h
00:04:04.642    CC test/event/app_repeat/app_repeat.o
00:04:04.642    CC examples/blob/cli/blobcli.o
00:04:04.642    TEST_HEADER include/spdk/lvol.h
00:04:04.642    TEST_HEADER include/spdk/memory.h
00:04:04.642    TEST_HEADER include/spdk/mmio.h
00:04:04.642    CC test/blobfs/mkfs/mkfs.o
00:04:04.642    CC examples/bdev/hello_world/hello_bdev.o
00:04:04.642    TEST_HEADER include/spdk/nbd.h
00:04:04.642    CC app/fio/bdev/fio_plugin.o
00:04:04.642    CC examples/nvmf/nvmf/nvmf.o
00:04:04.642    TEST_HEADER include/spdk/notify.h
00:04:04.642    CC examples/blob/hello_world/hello_blob.o
00:04:04.642    TEST_HEADER include/spdk/nvme.h
00:04:04.642    CC examples/bdev/bdevperf/bdevperf.o
00:04:04.642    CC test/dma/test_dma/test_dma.o
00:04:04.642    CC test/accel/dif/dif.o
00:04:04.642    CC examples/thread/thread/thread_ex.o
00:04:04.642    TEST_HEADER include/spdk/nvme_intel.h
00:04:04.642    CC test/bdev/bdevio/bdevio.o
00:04:04.642    TEST_HEADER include/spdk/nvme_ocssd.h
00:04:04.642    CC test/app/bdev_svc/bdev_svc.o
00:04:04.642    CC test/event/scheduler/scheduler.o
00:04:04.642    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:04:04.642    TEST_HEADER include/spdk/nvme_spec.h
00:04:04.642    TEST_HEADER include/spdk/nvme_zns.h
00:04:04.642    TEST_HEADER include/spdk/nvmf_cmd.h
00:04:04.642    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:04:04.642    CC test/env/mem_callbacks/mem_callbacks.o
00:04:04.642    TEST_HEADER include/spdk/nvmf.h
00:04:04.642    TEST_HEADER include/spdk/nvmf_spec.h
00:04:04.642    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:04:04.642    TEST_HEADER include/spdk/nvmf_transport.h
00:04:04.642    TEST_HEADER include/spdk/opal.h
00:04:04.642    CC test/lvol/esnap/esnap.o
00:04:04.642    TEST_HEADER include/spdk/opal_spec.h
00:04:04.642    TEST_HEADER include/spdk/pci_ids.h
00:04:04.642    TEST_HEADER include/spdk/pipe.h
00:04:04.642    TEST_HEADER include/spdk/queue.h
00:04:04.642    TEST_HEADER include/spdk/reduce.h
00:04:04.642    TEST_HEADER include/spdk/rpc.h
00:04:04.903    LINK spdk_lspci
00:04:04.903    TEST_HEADER include/spdk/scheduler.h
00:04:04.903    TEST_HEADER include/spdk/scsi.h
00:04:04.903    LINK rpc_client_test
00:04:04.903    TEST_HEADER include/spdk/scsi_spec.h
00:04:04.903    TEST_HEADER include/spdk/sock.h
00:04:04.903    TEST_HEADER include/spdk/stdinc.h
00:04:04.903    TEST_HEADER include/spdk/string.h
00:04:04.903    TEST_HEADER include/spdk/thread.h
00:04:04.903    TEST_HEADER include/spdk/trace.h
00:04:04.903    TEST_HEADER include/spdk/trace_parser.h
00:04:04.903    TEST_HEADER include/spdk/tree.h
00:04:04.903    TEST_HEADER include/spdk/ublk.h
00:04:04.903    TEST_HEADER include/spdk/util.h
00:04:04.903    TEST_HEADER include/spdk/uuid.h
00:04:04.903    LINK spdk_nvme_discover
00:04:04.903    TEST_HEADER include/spdk/version.h
00:04:04.903    TEST_HEADER include/spdk/vfio_user_pci.h
00:04:04.903    TEST_HEADER include/spdk/vfio_user_spec.h
00:04:04.903    TEST_HEADER include/spdk/vhost.h
00:04:04.903    TEST_HEADER include/spdk/vmd.h
00:04:04.903    TEST_HEADER include/spdk/xor.h
00:04:04.903    TEST_HEADER include/spdk/zipf.h
00:04:04.903    LINK lsvmd
00:04:04.903    LINK reactor
00:04:04.903    CXX test/cpp_headers/accel.o
00:04:04.903    LINK jsoncat
00:04:04.903    LINK reactor_perf
00:04:04.903    LINK interrupt_tgt
00:04:04.903    LINK spdk_trace_record
00:04:04.903    LINK zipf
00:04:04.903    LINK nvmf_tgt
00:04:04.903    LINK histogram_perf
00:04:04.903    LINK pmr_persistence
00:04:04.903    LINK env_dpdk_post_init
00:04:04.903    LINK led
00:04:04.903    LINK poller_perf
00:04:04.903    LINK boot_partition
00:04:04.903    LINK event_perf
00:04:04.903    LINK vtophys
00:04:04.903    LINK vhost
00:04:04.903    LINK iscsi_tgt
00:04:04.903    LINK startup
00:04:04.903    LINK app_repeat
00:04:04.903    LINK connect_stress
00:04:04.903    LINK err_injection
00:04:04.903    LINK stub
00:04:04.903    LINK cmb_copy
00:04:04.903    LINK hello_world
00:04:04.903    LINK verify
00:04:05.165    LINK bdev_svc
00:04:05.165    LINK hotplug
00:04:05.165    LINK ioat_perf
00:04:05.165    LINK reserve
00:04:05.165    LINK simple_copy
00:04:05.165    LINK reset
00:04:05.165    LINK hello_blob
00:04:05.165    LINK hello_sock
00:04:05.165    LINK spdk_tgt
00:04:05.165    LINK mkfs
00:04:05.165    LINK scheduler
00:04:05.165    LINK hello_bdev
00:04:05.165    CXX test/cpp_headers/accel_module.o
00:04:05.165    LINK spdk_dd
00:04:05.165    CXX test/cpp_headers/assert.o
00:04:05.165    LINK nvme_dp
00:04:05.165    LINK overhead
00:04:05.165    LINK nvme_compliance
00:04:05.165    LINK sgl
00:04:05.165    LINK aer
00:04:05.165    LINK thread
00:04:05.165    LINK spdk_trace
00:04:05.165    LINK arbitration
00:04:05.165    CXX test/cpp_headers/barrier.o
00:04:05.165    CXX test/cpp_headers/base64.o
00:04:05.165    CXX test/cpp_headers/bdev.o
00:04:05.165    LINK reconnect
00:04:05.165    CXX test/cpp_headers/bdev_module.o
00:04:05.165    LINK abort
00:04:05.165    CXX test/cpp_headers/bdev_zone.o
00:04:05.433    LINK nvmf
00:04:05.433    CC test/nvme/fused_ordering/fused_ordering.o
00:04:05.433    LINK idxd_perf
00:04:05.433    CC test/nvme/doorbell_aers/doorbell_aers.o
00:04:05.433    CXX test/cpp_headers/bit_array.o
00:04:05.433    CXX test/cpp_headers/bit_pool.o
00:04:05.433    CC test/nvme/fdp/fdp.o
00:04:05.433    CXX test/cpp_headers/blob_bdev.o
00:04:05.433    CXX test/cpp_headers/blobfs_bdev.o
00:04:05.433    CXX test/cpp_headers/blobfs.o
00:04:05.433    CXX test/cpp_headers/blob.o
00:04:05.433    CXX test/cpp_headers/conf.o
00:04:05.433    CC test/nvme/cuse/cuse.o
00:04:05.433    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:04:05.433    CXX test/cpp_headers/config.o
00:04:05.433    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:04:05.433    CXX test/cpp_headers/cpuset.o
00:04:05.433    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:04:05.433    CXX test/cpp_headers/crc32.o
00:04:05.433    CXX test/cpp_headers/crc16.o
00:04:05.433    CXX test/cpp_headers/crc64.o
00:04:05.433    CXX test/cpp_headers/dif.o
00:04:05.433    LINK dif
00:04:05.433    CXX test/cpp_headers/dma.o
00:04:05.433    LINK pci_ut
00:04:05.433    LINK accel_perf
00:04:05.433    CXX test/cpp_headers/endian.o
00:04:05.433    CXX test/cpp_headers/env_dpdk.o
00:04:05.433    LINK test_dma
00:04:05.433    CXX test/cpp_headers/env.o
00:04:05.433    CXX test/cpp_headers/event.o
00:04:05.433    CXX test/cpp_headers/fd_group.o
00:04:05.433    CXX test/cpp_headers/fd.o
00:04:05.433    LINK bdevio
00:04:05.434    CXX test/cpp_headers/file.o
00:04:05.434    CXX test/cpp_headers/ftl.o
00:04:05.434    CXX test/cpp_headers/gpt_spec.o
00:04:05.434    CXX test/cpp_headers/hexlify.o
00:04:05.434    LINK nvme_manage
00:04:05.434    CXX test/cpp_headers/histogram_data.o
00:04:05.434    CXX test/cpp_headers/idxd.o
00:04:05.434    CXX test/cpp_headers/idxd_spec.o
00:04:05.434    LINK nvme_fuzz
00:04:05.434    CXX test/cpp_headers/init.o
00:04:05.434    CXX test/cpp_headers/ioat.o
00:04:05.693    LINK spdk_bdev
00:04:05.693    CXX test/cpp_headers/ioat_spec.o
00:04:05.693    CXX test/cpp_headers/iscsi_spec.o
00:04:05.693    CXX test/cpp_headers/json.o
00:04:05.693    LINK spdk_nvme
00:04:05.693    CXX test/cpp_headers/jsonrpc.o
00:04:05.693    LINK blobcli
00:04:05.693    CXX test/cpp_headers/likely.o
00:04:05.693    CXX test/cpp_headers/log.o
00:04:05.693    CXX test/cpp_headers/memory.o
00:04:05.693    CXX test/cpp_headers/lvol.o
00:04:05.693    LINK spdk_nvme_perf
00:04:05.693    LINK doorbell_aers
00:04:05.693    CXX test/cpp_headers/mmio.o
00:04:05.693    LINK fused_ordering
00:04:05.693    CXX test/cpp_headers/nbd.o
00:04:05.693    CXX test/cpp_headers/notify.o
00:04:05.693    CXX test/cpp_headers/nvme.o
00:04:05.693    CXX test/cpp_headers/nvme_intel.o
00:04:05.693    CXX test/cpp_headers/nvme_ocssd.o
00:04:05.693    LINK spdk_top
00:04:05.693    CXX test/cpp_headers/nvme_ocssd_spec.o
00:04:05.693    CXX test/cpp_headers/nvme_spec.o
00:04:05.959    CXX test/cpp_headers/nvme_zns.o
00:04:05.959    CXX test/cpp_headers/nvmf_cmd.o
00:04:05.959    CXX test/cpp_headers/nvmf_fc_spec.o
00:04:05.959    CXX test/cpp_headers/nvmf.o
00:04:05.959    CXX test/cpp_headers/nvmf_spec.o
00:04:05.959    CXX test/cpp_headers/nvmf_transport.o
00:04:05.959    LINK mem_callbacks
00:04:05.959    CXX test/cpp_headers/opal.o
00:04:05.959    CXX test/cpp_headers/opal_spec.o
00:04:05.959    CXX test/cpp_headers/pci_ids.o
00:04:05.959    CXX test/cpp_headers/pipe.o
00:04:05.959    CXX test/cpp_headers/queue.o
00:04:05.959    CXX test/cpp_headers/reduce.o
00:04:05.959    CXX test/cpp_headers/rpc.o
00:04:05.959    LINK spdk_nvme_identify
00:04:05.959    CXX test/cpp_headers/scheduler.o
00:04:05.959    CXX test/cpp_headers/scsi.o
00:04:05.959    CXX test/cpp_headers/scsi_spec.o
00:04:05.959    CXX test/cpp_headers/sock.o
00:04:05.959    CXX test/cpp_headers/string.o
00:04:05.959    CXX test/cpp_headers/stdinc.o
00:04:05.959    CXX test/cpp_headers/thread.o
00:04:05.959    CXX test/cpp_headers/trace.o
00:04:05.959    CXX test/cpp_headers/trace_parser.o
00:04:05.959    CXX test/cpp_headers/tree.o
00:04:05.959    LINK fdp
00:04:05.959    CXX test/cpp_headers/ublk.o
00:04:05.959    CXX test/cpp_headers/util.o
00:04:05.959    CXX test/cpp_headers/uuid.o
00:04:05.959    CXX test/cpp_headers/version.o
00:04:05.959    CXX test/cpp_headers/vfio_user_pci.o
00:04:05.959    CXX test/cpp_headers/vfio_user_spec.o
00:04:05.959    CXX test/cpp_headers/vhost.o
00:04:05.959    CXX test/cpp_headers/xor.o
00:04:05.959    CXX test/cpp_headers/vmd.o
00:04:05.959    CXX test/cpp_headers/zipf.o
00:04:06.220    LINK bdevperf
00:04:06.220    LINK vhost_fuzz
00:04:06.220    LINK memory_ut
00:04:06.787    LINK cuse
00:04:07.726    LINK iscsi_fuzz
00:04:10.264    LINK esnap
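The CC/CXX/LINK lines above are SPDK's quiet make output for the unit-test build; the TEST_HEADER/CXX pairs in particular come from the cpp_headers test, which compiles every public spdk/*.h header as a standalone C++ translation unit to prove it is self-contained and C++-clean. A minimal bash sketch of that pattern (paths and compiler flags are illustrative, not SPDK's actual build rules):

    # For each public header, emit a one-line C++ file that includes it, then
    # compile that file on its own. A header that relies on something it does
    # not itself include fails here.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include "spdk/%s.h"\n' "$name" > "test/cpp_headers/$name.cpp"
        g++ -std=c++11 -Iinclude -c "test/cpp_headers/$name.cpp" \
            -o "test/cpp_headers/$name.o"
    done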
00:04:10.523  
00:04:10.523  real	0m41.131s
00:04:10.523  user	6m35.225s
00:04:10.523  sys	2m27.582s
00:04:10.523   00:34:59	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:04:10.523   00:34:59	-- common/autotest_common.sh@10 -- $ set +x
00:04:10.523  ************************************
00:04:10.523  END TEST make
00:04:10.523  ************************************
00:04:10.782    00:34:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:10.782     00:34:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:10.782     00:34:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:10.782    00:35:00	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:10.782    00:35:00	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:10.782    00:35:00	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:10.782    00:35:00	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:10.782    00:35:00	-- scripts/common.sh@335 -- # IFS=.-:
00:04:10.782    00:35:00	-- scripts/common.sh@335 -- # read -ra ver1
00:04:10.782    00:35:00	-- scripts/common.sh@336 -- # IFS=.-:
00:04:10.782    00:35:00	-- scripts/common.sh@336 -- # read -ra ver2
00:04:10.782    00:35:00	-- scripts/common.sh@337 -- # local 'op=<'
00:04:10.782    00:35:00	-- scripts/common.sh@339 -- # ver1_l=2
00:04:10.782    00:35:00	-- scripts/common.sh@340 -- # ver2_l=1
00:04:10.782    00:35:00	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:10.782    00:35:00	-- scripts/common.sh@343 -- # case "$op" in
00:04:10.782    00:35:00	-- scripts/common.sh@344 -- # : 1
00:04:10.782    00:35:00	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:10.782    00:35:00	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:10.782     00:35:00	-- scripts/common.sh@364 -- # decimal 1
00:04:10.782     00:35:00	-- scripts/common.sh@352 -- # local d=1
00:04:10.782     00:35:00	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:10.782     00:35:00	-- scripts/common.sh@354 -- # echo 1
00:04:10.782    00:35:00	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:10.782     00:35:00	-- scripts/common.sh@365 -- # decimal 2
00:04:10.782     00:35:00	-- scripts/common.sh@352 -- # local d=2
00:04:10.782     00:35:00	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:10.782     00:35:00	-- scripts/common.sh@354 -- # echo 2
00:04:10.782    00:35:00	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:10.782    00:35:00	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:10.782    00:35:00	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:10.782    00:35:00	-- scripts/common.sh@367 -- # return 0
00:04:10.782    00:35:00	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:10.782    00:35:00	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:10.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.782  		--rc genhtml_branch_coverage=1
00:04:10.782  		--rc genhtml_function_coverage=1
00:04:10.782  		--rc genhtml_legend=1
00:04:10.782  		--rc geninfo_all_blocks=1
00:04:10.782  		--rc geninfo_unexecuted_blocks=1
00:04:10.782  		
00:04:10.782  		'
00:04:10.782    00:35:00	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:10.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.782  		--rc genhtml_branch_coverage=1
00:04:10.782  		--rc genhtml_function_coverage=1
00:04:10.782  		--rc genhtml_legend=1
00:04:10.782  		--rc geninfo_all_blocks=1
00:04:10.782  		--rc geninfo_unexecuted_blocks=1
00:04:10.782  		
00:04:10.782  		'
00:04:10.782    00:35:00	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:10.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.782  		--rc genhtml_branch_coverage=1
00:04:10.782  		--rc genhtml_function_coverage=1
00:04:10.782  		--rc genhtml_legend=1
00:04:10.782  		--rc geninfo_all_blocks=1
00:04:10.782  		--rc geninfo_unexecuted_blocks=1
00:04:10.782  		
00:04:10.782  		'
00:04:10.782    00:35:00	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:10.782  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:10.782  		--rc genhtml_branch_coverage=1
00:04:10.782  		--rc genhtml_function_coverage=1
00:04:10.782  		--rc genhtml_legend=1
00:04:10.782  		--rc geninfo_all_blocks=1
00:04:10.782  		--rc geninfo_unexecuted_blocks=1
00:04:10.782  		
00:04:10.782  		'
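The trace above is common.sh deciding which lcov options to export: it asks whether the installed lcov (1.15, per the --version call) is older than 2 via `lt 1.15 2`, which splits both version strings on '.', '-' and ':' and compares them component by component. A condensed reconstruction of that logic from the trace (the real cmp_versions also dispatches on other operators through its `case "$op"`):

    cmp_versions() {
        local IFS=.-:            # split both versions on . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v d1 d2
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components act as 0
            [[ $d1 =~ ^[0-9]+$ ]] || d1=0       # 'decimal' in the real script
            [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            if ((d1 > d2)); then [[ $op == '>' ]]; return; fi
            if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
        done
        return 1    # versions equal: neither strictly < nor >
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, as traced above

Since 1.15 < 2 holds, the lcov-1.x style --rc branch/function coverage options get baked into LCOV_OPTS and LCOV for the rest of the run.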
00:04:10.782   00:35:00	-- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh
00:04:10.782     00:35:00	-- nvmf/common.sh@7 -- # uname -s
00:04:10.782    00:35:00	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:10.782    00:35:00	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:10.782    00:35:00	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:10.782    00:35:00	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:10.782    00:35:00	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:10.782    00:35:00	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:10.782    00:35:00	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:10.782    00:35:00	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:10.782    00:35:00	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:10.782     00:35:00	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:10.782    00:35:00	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e
00:04:10.782    00:35:00	-- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e
00:04:10.782    00:35:00	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:10.782    00:35:00	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:10.782    00:35:00	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:10.782    00:35:00	-- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:04:11.042     00:35:00	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:11.042     00:35:00	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:11.042     00:35:00	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:11.042      00:35:00	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:11.042      00:35:00	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:11.042      00:35:00	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:11.042      00:35:00	-- paths/export.sh@5 -- # export PATH
00:04:11.042      00:35:00	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:11.042    00:35:00	-- nvmf/common.sh@46 -- # : 0
00:04:11.042    00:35:00	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:04:11.042    00:35:00	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:04:11.042    00:35:00	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:04:11.042    00:35:00	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:11.042    00:35:00	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:11.042    00:35:00	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:04:11.042    00:35:00	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:04:11.042    00:35:00	-- nvmf/common.sh@50 -- # have_pci_nics=0
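Sourcing test/nvmf/common.sh establishes the NVMe-oF defaults seen above (ports 4420-4422, a host NQN generated by `nvme gen-hostnqn`, phy-fallback networking) and then runs build_nvmf_app_args to assemble the target's argument array. Only part of that function is visible in the xtrace; a sketch of the grounded portion (the two skipped `[ ... -eq 1 ]` guards select launch variants whose flag names the trace does not reveal):

    build_nvmf_app_args() {
        # First guard (false in this run) would switch to an alternate launch
        # mode; its controlling flag is not visible in the xtrace.
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm instance id; 0xFFFF
                                                      # appears to be a tracepoint mask
        NVMF_APP+=("${NO_HUGE[@]}")   # empty array unless a no-hugepages run
        # A final '[ -n ... ]' guard appends extra options only when its
        # variable is non-empty; it is empty here.
    }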
00:04:11.042   00:35:00	-- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:11.042    00:35:00	-- spdk/autotest.sh@32 -- # uname -s
00:04:11.042   00:35:00	-- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:11.042   00:35:00	-- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:04:11.042   00:35:00	-- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps
00:04:11.042   00:35:00	-- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:04:11.042   00:35:00	-- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps
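Here autotest.sh re-routes kernel core dumps for the duration of the run: it saves the systemd-coredump pipe pattern shown above, creates the coredumps output directory, and installs SPDK's core-collector.sh as the crash handler. xtrace does not print redirections, so the targets of the two echo lines are inferred rather than shown; the conventional form would be:

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # saved for restore
    mkdir -p "$output_dir/coredumps"
    # Pipe every crash into the collector: %P pid, %s signal, %t timestamp.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # The second echo presumably records where collected cores should land
    # (this destination path is an assumption; the log does not show it).
    echo "$output_dir/coredumps" > "$rootdir/.coredump_path"

The trap installed a few lines below ('autotest_cleanup || :') is presumably where settings like this get unwound on exit.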
00:04:11.042   00:35:00	-- spdk/autotest.sh@44 -- # modprobe nbd
00:04:11.042    00:35:00	-- spdk/autotest.sh@46 -- # type -P udevadm
00:04:11.042   00:35:00	-- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:11.042   00:35:00	-- spdk/autotest.sh@48 -- # udevadm_pid=895123
00:04:11.042   00:35:00	-- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power
00:04:11.042   00:35:00	-- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:11.042   00:35:00	-- spdk/autotest.sh@54 -- # echo 895125
00:04:11.042   00:35:00	-- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power
00:04:11.042   00:35:00	-- spdk/autotest.sh@56 -- # echo 895126
00:04:11.042   00:35:00	-- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power
00:04:11.042   00:35:00	-- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]]
00:04:11.042   00:35:00	-- spdk/autotest.sh@60 -- # echo 895127
00:04:11.042   00:35:00	-- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l
00:04:11.042   00:35:00	-- spdk/autotest.sh@62 -- # echo 895128
00:04:11.042   00:35:00	-- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l
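The jumbled autotest.sh line numbers here (54 traced before 53, 56 before 55, and so on) are a backgrounding artifact: each power-monitor collector is launched asynchronously and its PID echoed, so the parent shell's echo is traced before the child's own command line. The launch shape is simply:

    # Start a collector in the background; the echoed PID (e.g. 895125 above)
    # lets cleanup code reap it later. Whether the PID is captured via command
    # substitution or merely logged is not visible in the trace.
    "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output_dir/power" &
    echo $!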
00:04:11.042   00:35:00	-- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:11.042   00:35:00	-- spdk/autotest.sh@68 -- # timing_enter autotest
00:04:11.042   00:35:00	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:11.042   00:35:00	-- common/autotest_common.sh@10 -- # set +x
00:04:11.042   00:35:00	-- spdk/autotest.sh@70 -- # create_test_list
00:04:11.042   00:35:00	-- common/autotest_common.sh@746 -- # xtrace_disable
00:04:11.042   00:35:00	-- common/autotest_common.sh@10 -- # set +x
00:04:11.042  Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log
00:04:11.042  Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log
00:04:11.042     00:35:00	-- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh
00:04:11.042    00:35:00	-- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk
00:04:11.042   00:35:00	-- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:04:11.042   00:35:00	-- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output
00:04:11.042   00:35:00	-- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvme-phy-autotest/spdk
00:04:11.042   00:35:00	-- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:04:11.042    00:35:00	-- common/autotest_common.sh@1450 -- # uname
00:04:11.042   00:35:00	-- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']'
00:04:11.042   00:35:00	-- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:04:11.042    00:35:00	-- common/autotest_common.sh@1470 -- # uname
00:04:11.042   00:35:00	-- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]]
00:04:11.042   00:35:00	-- spdk/autotest.sh@79 -- # [[ y == y ]]
00:04:11.042   00:35:00	-- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:11.042  lcov: LCOV version 1.15
00:04:11.042   00:35:00	-- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvme-phy-autotest/spdk -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info
00:04:14.332  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:04:14.332  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:04:14.332  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:04:14.332  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:04:14.591  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:04:14.591  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:04:46.681   00:35:32	-- spdk/autotest.sh@87 -- # timing_enter pre_cleanup
00:04:46.681   00:35:32	-- common/autotest_common.sh@722 -- # xtrace_disable
00:04:46.681   00:35:32	-- common/autotest_common.sh@10 -- # set +x
00:04:46.681   00:35:32	-- spdk/autotest.sh@89 -- # rm -f
00:04:46.681   00:35:32	-- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:04:46.681  0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:04:46.681  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:04:46.681  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:04:46.681  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:04:46.681  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:04:46.681  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:04:46.681  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:04:46.681  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:04:46.681  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:04:46.941  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:04:46.941   00:35:36	-- spdk/autotest.sh@94 -- # get_zoned_devs
00:04:46.941   00:35:36	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:46.941   00:35:36	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:46.941   00:35:36	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:46.941   00:35:36	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:46.941   00:35:36	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:46.941   00:35:36	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:46.941   00:35:36	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:46.941   00:35:36	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:46.941   00:35:36	-- spdk/autotest.sh@96 -- # (( 0 > 0 ))
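get_zoned_devs, traced above, scans every NVMe block device and records it only if the kernel reports it as zoned; with /sys/block/nvme0n1/queue/zoned reading "none", the map stays empty and `(( 0 > 0 ))` correctly finds nothing to special-case. Reconstructed shape (the stored value is a guess; the `local nvme bdf` in the trace suggests the real script keeps the PCI address):

    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme dev
        for nvme in /sys/block/nvme*; do
            dev=${nvme##*/}
            [[ -e /sys/block/$dev/queue/zoned ]] || continue
            if [[ $(< "/sys/block/$dev/queue/zoned") != none ]]; then
                zoned_devs[$dev]=1   # real value likely the controller's BDF
            fi
        done
    }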
00:04:46.941    00:35:36	-- spdk/autotest.sh@108 -- # ls /dev/nvme0n1
00:04:46.941    00:35:36	-- spdk/autotest.sh@108 -- # grep -v p
00:04:46.941   00:35:36	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:46.941   00:35:36	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:04:46.941   00:35:36	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1
00:04:46.941   00:35:36	-- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:46.941   00:35:36	-- scripts/common.sh@389 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:47.200  No valid GPT data, bailing
00:04:47.200    00:35:36	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:47.200   00:35:36	-- scripts/common.sh@393 -- # pt=
00:04:47.200   00:35:36	-- scripts/common.sh@394 -- # return 1
00:04:47.200   00:35:36	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:47.200  1+0 records in
00:04:47.200  1+0 records out
00:04:47.200  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00275977 s, 380 MB/s
00:04:47.200   00:35:36	-- spdk/autotest.sh@116 -- # sync
00:04:47.200   00:35:36	-- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:47.200   00:35:36	-- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:47.200    00:35:36	-- common/autotest_common.sh@22 -- # reap_spdk_processes
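This block is autotest's pre-test disk scrub: it iterates over whole NVMe namespaces (grep -v p filters out partitions like nvme0n1p1), skips any zoned devices found above, and zeroes the first MiB only when block_in_use finds no partition table. Here spdk-gpt.py bailed with no valid GPT and blkid reported an empty PTTYPE, so the dd ran. Condensed logic (the exact return-value plumbing inside block_in_use is reconstructed, not verbatim):

    block_in_use() {
        local block=$1 pt
        # SPDK's GPT parser: treat the disk as busy if it holds a valid GPT.
        "$rootdir/scripts/spdk-gpt.py" "$block" && return 0
        # Fallback: any other partition-table type reported by blkid.
        pt=$(blkid -s PTTYPE -o value "$block") || true
        [[ -n $pt ]] && return 0
        return 1   # nothing recognizable on the disk
    }

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        [[ -z ${zoned_devs[${dev##*/}]:-} ]] || continue  # never scrub zoned disks
        if ! block_in_use "$dev"; then
            dd if=/dev/zero of="$dev" bs=1M count=1       # wipe stale labels
        fi
    done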
00:04:52.477    00:35:41	-- spdk/autotest.sh@122 -- # uname -s
00:04:52.477   00:35:41	-- spdk/autotest.sh@122 -- # '[' Linux = Linux ']'
00:04:52.477   00:35:41	-- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/test-setup.sh
00:04:52.477   00:35:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:52.477   00:35:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:52.477   00:35:41	-- common/autotest_common.sh@10 -- # set +x
00:04:52.477  ************************************
00:04:52.477  START TEST setup.sh
00:04:52.477  ************************************
00:04:52.477   00:35:41	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/test-setup.sh
00:04:52.477  * Looking for test storage...
00:04:52.477  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:04:52.477    00:35:41	-- setup/test-setup.sh@10 -- # uname -s
00:04:52.477   00:35:41	-- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:52.477   00:35:41	-- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/acl.sh
00:04:52.477   00:35:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:52.477   00:35:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:52.477   00:35:41	-- common/autotest_common.sh@10 -- # set +x
00:04:52.477  ************************************
00:04:52.477  START TEST acl
00:04:52.477  ************************************
00:04:52.477   00:35:41	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/acl.sh
00:04:52.477  * Looking for test storage...
00:04:52.477  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:04:52.478   00:35:41	-- setup/acl.sh@10 -- # get_zoned_devs
00:04:52.478   00:35:41	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:52.478   00:35:41	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:52.478   00:35:41	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:52.478   00:35:41	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:52.478   00:35:41	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:52.478   00:35:41	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:52.478   00:35:41	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:52.478   00:35:41	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:52.478   00:35:41	-- setup/acl.sh@12 -- # devs=()
00:04:52.478   00:35:41	-- setup/acl.sh@12 -- # declare -a devs
00:04:52.478   00:35:41	-- setup/acl.sh@13 -- # drivers=()
00:04:52.478   00:35:41	-- setup/acl.sh@13 -- # declare -A drivers
00:04:52.478   00:35:41	-- setup/acl.sh@51 -- # setup reset
00:04:52.478   00:35:41	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:52.478   00:35:41	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:04:56.670   00:35:45	-- setup/acl.sh@52 -- # collect_setup_devs
00:04:56.670   00:35:45	-- setup/acl.sh@16 -- # local dev driver
00:04:56.670   00:35:45	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:56.670    00:35:45	-- setup/acl.sh@15 -- # setup output status
00:04:56.670    00:35:45	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.670    00:35:45	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status
00:04:59.206  Hugepages
00:04:59.206  node     hugesize     free /  total
00:04:59.206   00:35:47	-- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:59.206   00:35:47	-- setup/acl.sh@19 -- # continue
00:04:59.206   00:35:47	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.206   00:35:47	-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:59.206   00:35:47	-- setup/acl.sh@19 -- # continue
00:04:59.206   00:35:47	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.206   00:35:47	-- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:59.206   00:35:47	-- setup/acl.sh@19 -- # continue
00:04:59.206   00:35:47	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.206  
00:04:59.206  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:59.206   00:35:48	-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:59.206   00:35:48	-- setup/acl.sh@19 -- # continue
00:04:59.206   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.206   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:04:59.206   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.206   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.206   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.206   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:04:59.206   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.206   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.206   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.206   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:04:59.206   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.206   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.206   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.206   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@22 -- # devs+=("$dev")
00:04:59.207   00:35:48	-- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:59.207   00:35:48	-- setup/acl.sh@20 -- # continue
00:04:59.207   00:35:48	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:59.207   00:35:48	-- setup/acl.sh@24 -- # (( 1 > 0 ))
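collect_setup_devs drives the whole status table above through one read loop: `read -r _ dev _ _ _ driver _` picks out the BDF and driver columns, rows whose second field is not a BDF (headers, hugepage lines) are dropped by the *:*:*.* glob, non-nvme drivers such as ioatdma are skipped, and anything listed in PCI_BLOCKED is excluded. Only 0000:5e:00.0 survives, hence `(( 1 > 0 ))`. In outline:

    devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue          # not a PCI row
        [[ $driver == nvme ]] || continue          # ioatdma channels ignored
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue # honor the block list
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(setup output status)
    (( ${#devs[@]} > 0 ))   # the acl test needs at least one usable controller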
00:04:59.207   00:35:48	-- setup/acl.sh@54 -- # run_test denied denied
00:04:59.207   00:35:48	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:59.207   00:35:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:59.207   00:35:48	-- common/autotest_common.sh@10 -- # set +x
00:04:59.207  ************************************
00:04:59.207  START TEST denied
00:04:59.207  ************************************
00:04:59.207   00:35:48	-- common/autotest_common.sh@1114 -- # denied
00:04:59.207   00:35:48	-- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:04:59.207   00:35:48	-- setup/acl.sh@38 -- # setup output config
00:04:59.207   00:35:48	-- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:04:59.207   00:35:48	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.207   00:35:48	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:02.502  0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:05:02.502   00:35:51	-- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:05:02.502   00:35:51	-- setup/acl.sh@28 -- # local dev driver
00:05:02.502   00:35:51	-- setup/acl.sh@30 -- # for dev in "$@"
00:05:02.502   00:35:51	-- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:05:02.502    00:35:51	-- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:05:02.502   00:35:51	-- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:02.502   00:35:51	-- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:02.502   00:35:51	-- setup/acl.sh@41 -- # setup reset
00:05:02.502   00:35:51	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:02.502   00:35:51	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
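The denied test asserts two things: setup.sh config must print the 'Skipping denied controller' line for the PCI_BLOCKED device (matched by the grep above), and verify() must find the device still bound to the kernel nvme driver rather than handed to userspace. Per the trace, the driver check is just a symlink resolution:

    verify() {
        local dev driver
        for dev in "$@"; do
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            # 'driver' is a symlink into /sys/bus/pci/drivers/<name>.
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]]   # expected driver; the real script
                                          # likely reads it from drivers["$dev"]
        done
    }
    verify 0000:5e:00.0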
00:05:06.698  
00:05:06.698  real	0m7.669s
00:05:06.698  user	0m2.334s
00:05:06.698  sys	0m4.547s
00:05:06.698   00:35:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:06.698   00:35:55	-- common/autotest_common.sh@10 -- # set +x
00:05:06.698  ************************************
00:05:06.698  END TEST denied
00:05:06.698  ************************************
00:05:06.698   00:35:55	-- setup/acl.sh@55 -- # run_test allowed allowed
00:05:06.698   00:35:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:06.698   00:35:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:06.698   00:35:55	-- common/autotest_common.sh@10 -- # set +x
00:05:06.698  ************************************
00:05:06.698  START TEST allowed
00:05:06.698  ************************************
00:05:06.699   00:35:55	-- common/autotest_common.sh@1114 -- # allowed
00:05:06.699   00:35:55	-- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:05:06.699   00:35:55	-- setup/acl.sh@45 -- # setup output config
00:05:06.699   00:35:55	-- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:05:06.699   00:35:55	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.699   00:35:55	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:13.363  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:05:13.363   00:36:02	-- setup/acl.sh@47 -- # verify
00:05:13.363   00:36:02	-- setup/acl.sh@28 -- # local dev driver
00:05:13.363   00:36:02	-- setup/acl.sh@48 -- # setup reset
00:05:13.363   00:36:02	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:13.363   00:36:02	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
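With PCI_ALLOWED set instead, setup.sh config is expected to rebind the controller to a userspace-capable driver, and the grep matches the resulting 'nvme -> vfio-pci' line above. The log does not show how setup.sh performs the rebind; the standard sysfs sequence for moving a PCI function between drivers (a generic technique, not SPDK's exact code, and assuming the vfio-pci module is already loaded) is:

    bdf=0000:5e:00.0
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # prefer vfio-pci
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"    # detach from nvme
    echo "$bdf"   > /sys/bus/pci/drivers_probe                   # re-run matching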
00:05:17.570  
00:05:17.570  real	0m10.020s
00:05:17.570  user	0m2.240s
00:05:17.570  sys	0m4.561s
00:05:17.570   00:36:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:17.570   00:36:05	-- common/autotest_common.sh@10 -- # set +x
00:05:17.570  ************************************
00:05:17.570  END TEST allowed
00:05:17.570  ************************************
00:05:17.570  
00:05:17.570  real	0m24.617s
00:05:17.570  user	0m6.965s
00:05:17.570  sys	0m13.848s
00:05:17.570   00:36:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:17.570   00:36:06	-- common/autotest_common.sh@10 -- # set +x
00:05:17.570  ************************************
00:05:17.570  END TEST acl
00:05:17.570  ************************************
00:05:17.570   00:36:06	-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/hugepages.sh
00:05:17.570   00:36:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:17.570   00:36:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:17.570   00:36:06	-- common/autotest_common.sh@10 -- # set +x
00:05:17.570  ************************************
00:05:17.570  START TEST hugepages
00:05:17.570  ************************************
00:05:17.570   00:36:06	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/hugepages.sh
00:05:17.570  * Looking for test storage...
00:05:17.570  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:05:17.571   00:36:06	-- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:17.571   00:36:06	-- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:17.571   00:36:06	-- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:17.571   00:36:06	-- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:17.571   00:36:06	-- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:17.571    00:36:06	-- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:17.571    00:36:06	-- setup/common.sh@17 -- # local get=Hugepagesize
00:05:17.571    00:36:06	-- setup/common.sh@18 -- # local node=
00:05:17.571    00:36:06	-- setup/common.sh@19 -- # local var val
00:05:17.571    00:36:06	-- setup/common.sh@20 -- # local mem_f mem
00:05:17.571    00:36:06	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.571    00:36:06	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.571    00:36:06	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.571    00:36:06	-- setup/common.sh@28 -- # mapfile -t mem
00:05:17.571    00:36:06	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.571    00:36:06	-- setup/common.sh@31 -- # IFS=': '
00:05:17.571    00:36:06	-- setup/common.sh@31 -- # read -r var val _
00:05:17.571     00:36:06	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        74886168 kB' 'MemAvailable:   78409464 kB' 'Buffers:            8064 kB' 'Cached:         11150572 kB' 'SwapCached:            0 kB' 'Active:          7957416 kB' 'Inactive:        3690704 kB' 'Active(anon):    7569492 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493428 kB' 'Mapped:           150100 kB' 'Shmem:           7080008 kB' 'KReclaimable:     196096 kB' 'Slab:             628656 kB' 'SReclaimable:     196096 kB' 'SUnreclaim:       432560 kB' 'KernelStack:       16304 kB' 'PageTables:         7920 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52434148 kB' 'Committed_AS:    8766700 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199064 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    2048' 'HugePages_Free:     2048' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         4194304 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
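get_meminfo Hugepagesize works off the snapshot just printed: because no node was requested, it reads the system-wide /proc/meminfo; it strips any 'Node <n> ' prefix so per-node files parse the same way, then scans field by field with IFS=': ' until the requested key matches. The long run of continue lines that follows is exactly that scan discarding every field ahead of Hugepagesize (2048 kB on this host). Condensed:

    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo file instead (the real
        # helper also checks that the file exists, as traced above).
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry 'Node N '
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. 2048 for Hugepagesize (kB)
            return 0
        done
        return 1
    }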
00:05:17.571    00:36:06	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:17.571    00:36:06	-- setup/common.sh@32 -- # continue
00:05:17.571    00:36:06	-- setup/common.sh@31 -- # IFS=': '
00:05:17.571    00:36:06	-- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/read trace repeats for every remaining /proc/meminfo key, MemFree through HugePages_Surp, none matching Hugepagesize ...]
00:05:17.572    00:36:06	-- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:17.572    00:36:06	-- setup/common.sh@33 -- # echo 2048
00:05:17.572    00:36:06	-- setup/common.sh@33 -- # return 0
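[editor's note] The block above is get_meminfo doing a linear scan of /proc/meminfo: mapfile slurps the file, any "Node N " prefix is stripped so per-node meminfo files parse identically, and IFS=': ' read splits each line into key and value until the requested key matches (xtrace prints the right-hand side as \H\u\g\e... because the comparison is against a quoted, literal string). A self-contained sketch of the same technique, mirroring the names in the trace rather than reproducing the harness source:

    shopt -s extglob
    get_meminfo() {                        # e.g. get_meminfo Hugepagesize [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo Hugepagesize it prints 2048 on this machine, which is the value that feeds default_hugepages below.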
00:05:17.572   00:36:06	-- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:17.572   00:36:06	-- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:17.572   00:36:06	-- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:17.572   00:36:06	-- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:17.572   00:36:06	-- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:17.572   00:36:06	-- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:17.572   00:36:06	-- setup/hugepages.sh@24 -- # unset -v NRHUGE
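[editor's note] The two paths recorded above are the standard kernel knobs for the persistent hugepage pool: the sysfs file controls the 2048 kB size class specifically, the procfs file the default size class. Conventional usage (root required; 1024 is the count this test will request):

    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # per size class
    echo 1024 > /proc/sys/vm/nr_hugepages                                # default size class
    cat /proc/sys/vm/nr_hugepages                                        # read back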
00:05:17.572   00:36:06	-- setup/hugepages.sh@207 -- # get_nodes
00:05:17.572   00:36:06	-- setup/hugepages.sh@27 -- # local node
00:05:17.572   00:36:06	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.572   00:36:06	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:17.572   00:36:06	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.572   00:36:06	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:17.572   00:36:06	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:17.572   00:36:06	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
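[editor's note] get_nodes walks /sys/devices/system/node/node<N> and records one entry per NUMA node (2048 pages on node0, 0 on node1, hence no_nodes=2). A sketch of the enumeration; the data source for the per-node count is not visible in the trace, so reading free_hugepages here is an illustrative assumption:

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # key the array by the numeric suffix: .../node0 -> index 0
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/free_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this box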
00:05:17.572   00:36:06	-- setup/hugepages.sh@208 -- # clear_hp
00:05:17.572   00:36:06	-- setup/hugepages.sh@37 -- # local node hp
00:05:17.572   00:36:06	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:17.572   00:36:06	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:17.572   00:36:06	-- setup/hugepages.sh@41 -- # echo 0
00:05:17.572   00:36:06	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:17.572   00:36:06	-- setup/hugepages.sh@41 -- # echo 0
00:05:17.572   00:36:06	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:17.572   00:36:06	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:17.572   00:36:06	-- setup/hugepages.sh@41 -- # echo 0
00:05:17.572   00:36:06	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:17.572   00:36:06	-- setup/hugepages.sh@41 -- # echo 0
00:05:17.572   00:36:06	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:17.572   00:36:06	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
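[editor's note] clear_hp, as traced, zeroes every hugepage size class on every node (two nodes × two size classes = the four "echo 0" lines above) and exports CLEAR_HUGE=yes so later setup knows the pools start empty. An equivalent sketch, reusing the nodes_sys array from the previous sketch (root required):

    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            for hp in /sys/devices/system/node/node$node/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }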
00:05:17.572   00:36:06	-- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:17.572   00:36:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:17.572   00:36:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:17.572   00:36:06	-- common/autotest_common.sh@10 -- # set +x
00:05:17.572  ************************************
00:05:17.572  START TEST default_setup
00:05:17.572  ************************************
00:05:17.572   00:36:06	-- common/autotest_common.sh@1114 -- # default_setup
00:05:17.572   00:36:06	-- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:17.572   00:36:06	-- setup/hugepages.sh@49 -- # local size=2097152
00:05:17.572   00:36:06	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:17.572   00:36:06	-- setup/hugepages.sh@51 -- # shift
00:05:17.572   00:36:06	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:17.572   00:36:06	-- setup/hugepages.sh@52 -- # local node_ids
00:05:17.573   00:36:06	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.573   00:36:06	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:17.573   00:36:06	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:17.573   00:36:06	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:17.573   00:36:06	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.573   00:36:06	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:17.573   00:36:06	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:17.573   00:36:06	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.573   00:36:06	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.573   00:36:06	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:17.573   00:36:06	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:17.573   00:36:06	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:17.573   00:36:06	-- setup/hugepages.sh@73 -- # return 0
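[editor's note] The arithmetic behind nr_hugepages=1024 in the trace: a 2097152 kB (2 GiB) pool divided by the 2048 kB page size found earlier, with the whole allotment pinned to the single user-supplied node 0 (the kB unit is inferred from the numbers, not stated in the trace):

    size_kb=2097152        # requested pool
    page_kb=2048           # from get_meminfo Hugepagesize
    echo $(( size_kb / page_kb ))   # -> 1024 pages, all assigned to nodes_test[0]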
00:05:17.573   00:36:06	-- setup/hugepages.sh@137 -- # setup output
00:05:17.573   00:36:06	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:17.573   00:36:06	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:20.863  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:20.863  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:24.158  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
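[editor's note] Each "ioatdma -> vfio-pci" (and "nvme -> vfio-pci") line records setup.sh rebinding a PCI function to vfio-pci for userspace I/O. One standard sysfs sequence that produces this effect — setup.sh's internals may differ, and the address below is just the first one from the log:

    dev=0000:00:04.0
    modprobe vfio-pci
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override   # pin the next probe
    echo "$dev"   > /sys/bus/pci/devices/$dev/driver/unbind     # detach ioatdma
    echo "$dev"   > /sys/bus/pci/drivers_probe                  # reprobe -> vfio-pci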
00:05:24.158   00:36:12	-- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:24.158   00:36:12	-- setup/hugepages.sh@89 -- # local node
00:05:24.158   00:36:12	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:24.158   00:36:12	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:24.158   00:36:12	-- setup/hugepages.sh@92 -- # local surp
00:05:24.158   00:36:12	-- setup/hugepages.sh@93 -- # local resv
00:05:24.158   00:36:12	-- setup/hugepages.sh@94 -- # local anon
00:05:24.158   00:36:12	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
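[editor's note] The escaped test above reads the kernel's transparent-hugepage mode; "always [madvise] never" means madvise is selected, so THP is active and AnonHugePages has to be sampled (the get_meminfo call that follows). An equivalent check:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    [[ $thp != *"[never]"* ]] && echo "THP on ($thp): sample AnonHugePages too"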
00:05:24.158    00:36:12	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:24.158    00:36:12	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:24.158    00:36:12	-- setup/common.sh@18 -- # local node=
00:05:24.158    00:36:12	-- setup/common.sh@19 -- # local var val
00:05:24.158    00:36:12	-- setup/common.sh@20 -- # local mem_f mem
00:05:24.158    00:36:12	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.158    00:36:12	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.158    00:36:12	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.158    00:36:12	-- setup/common.sh@28 -- # mapfile -t mem
00:05:24.158    00:36:12	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.158    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.158    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.159     00:36:12	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77075300 kB' 'MemAvailable:   80598372 kB' 'Buffers:            8064 kB' 'Cached:         11150688 kB' 'SwapCached:            0 kB' 'Active:          7959232 kB' 'Inactive:        3690704 kB' 'Active(anon):    7571308 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        494056 kB' 'Mapped:           150196 kB' 'Shmem:           7080124 kB' 'KReclaimable:     195648 kB' 'Slab:             627428 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       431780 kB' 'KernelStack:       16160 kB' 'PageTables:         8152 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8768156 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199032 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:24.159    00:36:12	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:24.159    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.159    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.159    00:36:12	-- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/read trace repeats for each key, MemFree through HardwareCorrupted, none matching AnonHugePages ...]
00:05:24.159    00:36:12	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:24.159    00:36:12	-- setup/common.sh@33 -- # echo 0
00:05:24.159    00:36:12	-- setup/common.sh@33 -- # return 0
00:05:24.159   00:36:12	-- setup/hugepages.sh@97 -- # anon=0
00:05:24.159    00:36:12	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:24.159    00:36:12	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:24.160    00:36:12	-- setup/common.sh@18 -- # local node=
00:05:24.160    00:36:12	-- setup/common.sh@19 -- # local var val
00:05:24.160    00:36:12	-- setup/common.sh@20 -- # local mem_f mem
00:05:24.160    00:36:12	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.160    00:36:12	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.160    00:36:12	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.160    00:36:12	-- setup/common.sh@28 -- # mapfile -t mem
00:05:24.160    00:36:12	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.160     00:36:12	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77075608 kB' 'MemAvailable:   80598680 kB' 'Buffers:            8064 kB' 'Cached:         11150696 kB' 'SwapCached:            0 kB' 'Active:          7959400 kB' 'Inactive:        3690704 kB' 'Active(anon):    7571476 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        494216 kB' 'Mapped:           150172 kB' 'Shmem:           7080132 kB' 'KReclaimable:     195648 kB' 'Slab:             627428 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       431780 kB' 'KernelStack:       16048 kB' 'PageTables:         7732 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8768172 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      198936 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:24.160    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.160    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.160    00:36:12	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.160    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.160    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.160    00:36:12	-- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/read trace repeats for each key, MemFree through HugePages_Rsvd, none matching HugePages_Surp ...]
00:05:24.161    00:36:12	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.161    00:36:12	-- setup/common.sh@33 -- # echo 0
00:05:24.161    00:36:12	-- setup/common.sh@33 -- # return 0
00:05:24.161   00:36:12	-- setup/hugepages.sh@99 -- # surp=0
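[editor's note] verify_nr_hugepages now has anon=0 (no THP pages in flight) and surp=0 (no surplus pages); the scan starting below fetches HugePages_Rsvd the same way. The final pass/fail expression appears later in the log, but the quantities being assembled are, in sketch form (get_meminfo as sketched earlier):

    anon=$(get_meminfo AnonHugePages)    # 0 above
    surp=$(get_meminfo HugePages_Surp)   # 0 above
    resv=$(get_meminfo HugePages_Rsvd)   # the lookup now starting
    total=$(get_meminfo HugePages_Total) # 1024 in every snapshot above
    echo $(( total - surp ))             # persistent pool to check against the 1024 requested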
00:05:24.161    00:36:12	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:24.161    00:36:12	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:24.161    00:36:12	-- setup/common.sh@18 -- # local node=
00:05:24.161    00:36:12	-- setup/common.sh@19 -- # local var val
00:05:24.161    00:36:12	-- setup/common.sh@20 -- # local mem_f mem
00:05:24.161    00:36:12	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.161    00:36:12	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.161    00:36:12	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.161    00:36:12	-- setup/common.sh@28 -- # mapfile -t mem
00:05:24.161    00:36:12	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.161     00:36:12	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77076248 kB' 'MemAvailable:   80599320 kB' 'Buffers:            8064 kB' 'Cached:         11150708 kB' 'SwapCached:            0 kB' 'Active:          7957928 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570004 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493124 kB' 'Mapped:           150076 kB' 'Shmem:           7080144 kB' 'KReclaimable:     195648 kB' 'Slab:             627396 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       431748 kB' 'KernelStack:       16112 kB' 'PageTables:         7912 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8768188 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199000 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:24.161    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.161    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.161    00:36:12	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.161    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.161    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.161    00:36:12	-- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/read trace continues key by key toward the HugePages_Rsvd match ...]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.162    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.162    00:36:12	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.162    00:36:12	-- setup/common.sh@33 -- # echo 0
00:05:24.162    00:36:12	-- setup/common.sh@33 -- # return 0
00:05:24.162   00:36:12	-- setup/hugepages.sh@100 -- # resv=0
00:05:24.162   00:36:12	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:24.162  nr_hugepages=1024
00:05:24.162   00:36:12	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:24.162  resv_hugepages=0
00:05:24.162   00:36:12	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:24.162  surplus_hugepages=0
00:05:24.162   00:36:12	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:24.162  anon_hugepages=0
00:05:24.162   00:36:12	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.162   00:36:12	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
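The two arithmetic checks at hugepages.sh@107 and @109 are the heart of the verification. The literal 1024 on the left was expanded before xtrace printed it (a hugepage counter fetched earlier in the function; which one is not visible in this excerpt), and it must equal the configured total plus surplus plus reserved. With this run's values:

    nr_hugepages=1024 surp=0 resv=0           # values echoed just above
    (( 1024 == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0 -> true
    (( 1024 == nr_hugepages ))                # and no surplus/reserved drift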
00:05:24.162    00:36:12	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:24.162    00:36:12	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:24.162    00:36:12	-- setup/common.sh@18 -- # local node=
00:05:24.162    00:36:12	-- setup/common.sh@19 -- # local var val
00:05:24.162    00:36:12	-- setup/common.sh@20 -- # local mem_f mem
00:05:24.162    00:36:12	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.162    00:36:12	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.162    00:36:12	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.162    00:36:12	-- setup/common.sh@28 -- # mapfile -t mem
00:05:24.163    00:36:12	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.163     00:36:12	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77076288 kB' 'MemAvailable:   80599360 kB' 'Buffers:            8064 kB' 'Cached:         11150720 kB' 'SwapCached:            0 kB' 'Active:          7958388 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570464 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493616 kB' 'Mapped:           150076 kB' 'Shmem:           7080156 kB' 'KReclaimable:     195648 kB' 'Slab:             627492 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       431844 kB' 'KernelStack:       16128 kB' 'PageTables:         7720 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8768200 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.163    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.163    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:12	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:12	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.164    00:36:13	-- setup/common.sh@33 -- # echo 1024
00:05:24.164    00:36:13	-- setup/common.sh@33 -- # return 0
00:05:24.164   00:36:13	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.164   00:36:13	-- setup/hugepages.sh@112 -- # get_nodes
00:05:24.164   00:36:13	-- setup/hugepages.sh@27 -- # local node
00:05:24.164   00:36:13	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.164   00:36:13	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:24.164   00:36:13	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.164   00:36:13	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:24.164   00:36:13	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:24.164   00:36:13	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
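get_nodes (hugepages.sh@27-33) walks sysfs once per NUMA node and records each node's current hugepage count: the trace shows 1024 pages on node 0 and none on node 1 of this two-node machine. A sketch of that walk; the exact sysfs file read is an assumption inferred from the values seen, since the trace only shows the resulting assignments:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # hypothetical source file; only the stored values appear in the trace
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}                 # 2 on this machine
    (( no_nodes > 0 ))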
00:05:24.164   00:36:13	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:24.164   00:36:13	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:24.164    00:36:13	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:24.164    00:36:13	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:24.164    00:36:13	-- setup/common.sh@18 -- # local node=0
00:05:24.164    00:36:13	-- setup/common.sh@19 -- # local var val
00:05:24.164    00:36:13	-- setup/common.sh@20 -- # local mem_f mem
00:05:24.164    00:36:13	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.164    00:36:13	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:24.164    00:36:13	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:24.164    00:36:13	-- setup/common.sh@28 -- # mapfile -t mem
00:05:24.164    00:36:13	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164     00:36:13	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        42298200 kB' 'MemUsed:         5766648 kB' 'SwapCached:            0 kB' 'Active:          2727260 kB' 'Inactive:         117812 kB' 'Active(anon):    2462364 kB' 'Inactive(anon):        0 kB' 'Active(file):     264896 kB' 'Inactive(file):   117812 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       2448392 kB' 'Mapped:            97872 kB' 'AnonPages:        399944 kB' 'Shmem:           2065684 kB' 'KernelStack:        9784 kB' 'PageTables:         5260 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     115520 kB' 'Slab:             369980 kB' 'SReclaimable:     115520 kB' 'SUnreclaim:       254460 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.164    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.164    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # continue
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # IFS=': '
00:05:24.165    00:36:13	-- setup/common.sh@31 -- # read -r var val _
00:05:24.165    00:36:13	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.165    00:36:13	-- setup/common.sh@33 -- # echo 0
00:05:24.165    00:36:13	-- setup/common.sh@33 -- # return 0
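This get_meminfo call differs from the global ones above in one detail, visible at common.sh@23-24 and @29: with node=0 it reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that /proc/meminfo lines lack. The extglob substitution strips that prefix so the same key/value loop can parse both files. For example:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    printf 'raw:      %s\n' "${mem[0]}"       # e.g. "Node 0 MemTotal:  48064848 kB"
    mem=("${mem[@]#Node +([0-9]) }")
    printf 'stripped: %s\n' "${mem[0]}"       # e.g. "MemTotal:  48064848 kB"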
00:05:24.165   00:36:13	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:24.165   00:36:13	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:24.165   00:36:13	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:24.165   00:36:13	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:24.165   00:36:13	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:24.165  node0=1024 expecting 1024
00:05:24.165   00:36:13	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
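The sorted_t/sorted_s assignments at hugepages.sh@127 use array indices as a cheap set: writing sorted_t[count]=1 for every node leaves exactly one element when all nodes report the same count, so agreement can be checked without sorting. With node 0 holding all 1024 pages in this run, both sets collapse to a single key and the literal match at @130 passes. A sketch under those assumptions:

    nodes_test=([0]=1024) nodes_sys=([0]=1024)  # this run's single populated node
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1            # index-as-set: duplicates coalesce
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    (( ${#sorted_t[@]} == 1 ))                  # every node agrees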
00:05:24.165  
00:05:24.165  real	0m6.719s
00:05:24.165  user	0m1.396s
00:05:24.165  sys	0m2.280s
00:05:24.165   00:36:13	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:24.165   00:36:13	-- common/autotest_common.sh@10 -- # set +x
00:05:24.165  ************************************
00:05:24.165  END TEST default_setup
00:05:24.165  ************************************
00:05:24.165   00:36:13	-- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:24.165   00:36:13	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:24.165   00:36:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:24.165   00:36:13	-- common/autotest_common.sh@10 -- # set +x
00:05:24.165  ************************************
00:05:24.165  START TEST per_node_1G_alloc
00:05:24.165  ************************************
00:05:24.165   00:36:13	-- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:05:24.165   00:36:13	-- setup/hugepages.sh@143 -- # local IFS=,
00:05:24.165   00:36:13	-- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:24.165   00:36:13	-- setup/hugepages.sh@49 -- # local size=1048576
00:05:24.165   00:36:13	-- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:24.165   00:36:13	-- setup/hugepages.sh@51 -- # shift
00:05:24.165   00:36:13	-- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:24.165   00:36:13	-- setup/hugepages.sh@52 -- # local node_ids
00:05:24.165   00:36:13	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:24.165   00:36:13	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:24.165   00:36:13	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:24.165   00:36:13	-- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:24.165   00:36:13	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:24.165   00:36:13	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:24.165   00:36:13	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:24.165   00:36:13	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:24.165   00:36:13	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:24.165   00:36:13	-- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:24.165   00:36:13	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:24.165   00:36:13	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:24.165   00:36:13	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:24.165   00:36:13	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:24.165   00:36:13	-- setup/hugepages.sh@73 -- # return 0
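The get_test_nr_hugepages call at @145 asks for size 1048576 kB (1 GiB) on nodes 0 and 1. With the default hugepage size of 2048 kB reported in the meminfo dumps above, @57's nr_hugepages=512 is simply 1048576 / 2048, and @71 then assigns that count to each requested node. The arithmetic:

    size_kb=1048576                           # 1 GiB, from hugepages.sh@49
    hugepagesize_kb=2048                      # "Hugepagesize: 2048 kB" above
    echo $(( size_kb / hugepagesize_kb ))     # 512 pages per requested node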
00:05:24.165   00:36:13	-- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:24.165   00:36:13	-- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:24.165   00:36:13	-- setup/hugepages.sh@146 -- # setup output
00:05:24.165   00:36:13	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:24.165   00:36:13	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:27.461  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:27.461  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:27.461  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
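setup.sh reports that every PCI function it manages is already bound to vfio-pci, so no rebinding was needed: the test target at 0000:5e:00.0 (8086:0a54, presumably the NVMe SSD under test on this rig) and what appear to be the platform's DMA channels at 00:04.x and 80:04.x (8086:2021). One way to confirm such a binding by hand, using an address taken from this log:

    dev=0000:5e:00.0                          # address from the lines above
    basename "$(readlink -f /sys/bus/pci/devices/$dev/driver)"   # prints: vfio-pci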
00:05:27.461   00:36:16	-- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:27.461   00:36:16	-- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:27.461   00:36:16	-- setup/hugepages.sh@89 -- # local node
00:05:27.461   00:36:16	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:27.461   00:36:16	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:27.461   00:36:16	-- setup/hugepages.sh@92 -- # local surp
00:05:27.461   00:36:16	-- setup/hugepages.sh@93 -- # local resv
00:05:27.461   00:36:16	-- setup/hugepages.sh@94 -- # local anon
00:05:27.461   00:36:16	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
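The test at hugepages.sh@96 reads the transparent-hugepage policy: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled, and the traced string "always [madvise] never" does not match *\[\n\e\v\e\r\]*, so THP is not disabled and the script goes on to sample AnonHugePages as a baseline. An equivalent standalone check:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP can still create anonymous huge pages; record the current amount
        grep AnonHugePages /proc/meminfo
    fi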
00:05:27.461    00:36:16	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:27.461    00:36:16	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:27.461    00:36:16	-- setup/common.sh@18 -- # local node=
00:05:27.461    00:36:16	-- setup/common.sh@19 -- # local var val
00:05:27.461    00:36:16	-- setup/common.sh@20 -- # local mem_f mem
00:05:27.461    00:36:16	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.461    00:36:16	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.461    00:36:16	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.461    00:36:16	-- setup/common.sh@28 -- # mapfile -t mem
00:05:27.461    00:36:16	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.461    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.461    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.461     00:36:16	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77082492 kB' 'MemAvailable:   80605564 kB' 'Buffers:            8064 kB' 'Cached:         11150792 kB' 'SwapCached:            0 kB' 'Active:          7958600 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570676 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493668 kB' 'Mapped:           150104 kB' 'Shmem:           7080228 kB' 'KReclaimable:     195648 kB' 'Slab:             628288 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432640 kB' 'KernelStack:       15984 kB' 'PageTables:         7432 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8764336 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199128 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:27.461    00:36:16	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.461    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.461    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.461    00:36:16	-- setup/common.sh@31 -- # read -r var val _
[the compare/continue/read cycle repeats for each remaining field, MemFree through HardwareCorrupted, none matching AnonHugePages]
00:05:27.462    00:36:16	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.462    00:36:16	-- setup/common.sh@33 -- # echo 0
00:05:27.462    00:36:16	-- setup/common.sh@33 -- # return 0
00:05:27.462   00:36:16	-- setup/hugepages.sh@97 -- # anon=0
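A note for readers of this trace: the long compare/continue/read run above is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until the requested key (here AnonHugePages) matches, then echoing its value. A minimal standalone sketch of that logic, reconstructed from the trace rather than copied from the real script, so the function name and details are assumptions:

    # get_meminfo_sketch KEY  -- print the value of one /proc/meminfo field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip fields until the key matches
            echo "$val"                        # kB for sizes, a count for HugePages_*
            return 0
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo_sketch AnonHugePages)   # 0 on this node, matching the trace

The real script reads the file into an array with mapfile first (visible in the trace) rather than streaming it line by line; the effect is the same.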
00:05:27.462    00:36:16	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:27.462    00:36:16	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.462    00:36:16	-- setup/common.sh@18 -- # local node=
00:05:27.462    00:36:16	-- setup/common.sh@19 -- # local var val
00:05:27.462    00:36:16	-- setup/common.sh@20 -- # local mem_f mem
00:05:27.462    00:36:16	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.462    00:36:16	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.462    00:36:16	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.462    00:36:16	-- setup/common.sh@28 -- # mapfile -t mem
00:05:27.462    00:36:16	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.462    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.462    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.462     00:36:16	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77086544 kB' 'MemAvailable:   80609616 kB' 'Buffers:            8064 kB' 'Cached:         11150796 kB' 'SwapCached:            0 kB' 'Active:          7958316 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570392 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493392 kB' 'Mapped:           150072 kB' 'Shmem:           7080232 kB' 'KReclaimable:     195648 kB' 'Slab:             628260 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432612 kB' 'KernelStack:       16016 kB' 'PageTables:         7496 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8764348 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199096 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:27.462    00:36:16	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.462    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.462    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.462    00:36:16	-- setup/common.sh@31 -- # read -r var val _
[the compare/continue/read cycle repeats for each remaining field, MemFree through HugePages_Rsvd, none matching HugePages_Surp]
00:05:27.463    00:36:16	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.463    00:36:16	-- setup/common.sh@33 -- # echo 0
00:05:27.464    00:36:16	-- setup/common.sh@33 -- # return 0
00:05:27.464   00:36:16	-- setup/hugepages.sh@99 -- # surp=0
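Each get_meminfo call also decides which file to read: node= is empty here, so the -e test against /sys/devices/system/node/node/meminfo (note the doubled "node" where the empty variable expanded into the path) fails and the global /proc/meminfo is used. A sketch of that selection, under the assumption that a non-empty node picks the per-node sysfs file:

    # Pick a per-node meminfo when a NUMA node is given and exists,
    # otherwise fall back to the global /proc/meminfo.
    node=${node:-}                      # e.g. node=0 for NUMA node 0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"           # one array element per meminfo line

With node empty, the -e test checks the literal path .../node/node/meminfo, which does not exist, exactly as traced above.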
00:05:27.464    00:36:16	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:27.464    00:36:16	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:27.464    00:36:16	-- setup/common.sh@18 -- # local node=
00:05:27.464    00:36:16	-- setup/common.sh@19 -- # local var val
00:05:27.464    00:36:16	-- setup/common.sh@20 -- # local mem_f mem
00:05:27.464    00:36:16	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.464    00:36:16	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.464    00:36:16	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.464    00:36:16	-- setup/common.sh@28 -- # mapfile -t mem
00:05:27.464    00:36:16	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.464    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.464    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.464     00:36:16	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77087336 kB' 'MemAvailable:   80610408 kB' 'Buffers:            8064 kB' 'Cached:         11150796 kB' 'SwapCached:            0 kB' 'Active:          7958288 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570364 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493364 kB' 'Mapped:           150072 kB' 'Shmem:           7080232 kB' 'KReclaimable:     195648 kB' 'Slab:             628328 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432680 kB' 'KernelStack:       16016 kB' 'PageTables:         7516 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8764360 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199096 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:27.464    00:36:16	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:27.464    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.464    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.464    00:36:16	-- setup/common.sh@31 -- # read -r var val _
[the compare/continue/read cycle repeats for each remaining field, MemFree through HugePages_Free, none matching HugePages_Rsvd]
00:05:27.465    00:36:16	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:27.465    00:36:16	-- setup/common.sh@33 -- # echo 0
00:05:27.465    00:36:16	-- setup/common.sh@33 -- # return 0
00:05:27.465   00:36:16	-- setup/hugepages.sh@100 -- # resv=0
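One more detail visible in each call's preamble is mem=("${mem[@]#Node +([0-9]) }"). Per-node meminfo lines in sysfs are prefixed with "Node <N> ", and that extglob expansion strips the prefix so the same key/value parsing works for both sources. A small self-contained illustration (the sample lines are made up):

    shopt -s extglob                     # +([0-9]) needs extended globbing
    mem=("Node 0 HugePages_Total:   512" "Node 0 HugePages_Free:    512")
    mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node 0 " prefix
    printf '%s\n' "${mem[@]}"
    # HugePages_Total:   512
    # HugePages_Free:    512

On the global /proc/meminfo, as here, the pattern matches nothing and the array is left unchanged.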
00:05:27.465   00:36:16	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:27.465  nr_hugepages=1024
00:05:27.465   00:36:16	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:27.465  resv_hugepages=0
00:05:27.465   00:36:16	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:27.465  surplus_hugepages=0
00:05:27.465   00:36:16	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:27.465  anon_hugepages=0
00:05:27.465   00:36:16	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:27.465   00:36:16	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
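The two arithmetic guards above check that the kernel's HugePages_Total (1024) equals the requested nr_hugepages plus the surplus and reserved counts just gathered, i.e. 1024 == 1024 + 0 + 0. An equivalent standalone check, using awk in place of the traced parser:

    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    fi

The get_meminfo HugePages_Total call that follows re-reads that same counter through the traced parser.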
00:05:27.465    00:36:16	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:27.465    00:36:16	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:27.465    00:36:16	-- setup/common.sh@18 -- # local node=
00:05:27.465    00:36:16	-- setup/common.sh@19 -- # local var val
00:05:27.465    00:36:16	-- setup/common.sh@20 -- # local mem_f mem
00:05:27.465    00:36:16	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.465    00:36:16	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.465    00:36:16	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.465    00:36:16	-- setup/common.sh@28 -- # mapfile -t mem
00:05:27.465    00:36:16	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.465    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.465    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.465     00:36:16	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77087228 kB' 'MemAvailable:   80610300 kB' 'Buffers:            8064 kB' 'Cached:         11150836 kB' 'SwapCached:            0 kB' 'Active:          7957940 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570016 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        492960 kB' 'Mapped:           150072 kB' 'Shmem:           7080272 kB' 'KReclaimable:     195648 kB' 'Slab:             628328 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432680 kB' 'KernelStack:       16000 kB' 'PageTables:         7468 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8764376 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199096 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:27.465    00:36:16	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.465    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.465    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.465    00:36:16	-- setup/common.sh@31 -- # read -r var val _
[the compare/continue/read cycle repeats for each remaining field, MemFree through FileHugePages, none matching HugePages_Total]
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # continue
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.466    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.466    00:36:16	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.466    00:36:16	-- setup/common.sh@33 -- # echo 1024
00:05:27.466    00:36:16	-- setup/common.sh@33 -- # return 0
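The scan above is setup/common.sh's get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo file when a node id is given), strips the "Node N " prefix from sysfs lines, and walks the key/value pairs until the requested key matches. A minimal sketch reconstructed from this trace — the real SPDK source may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem_f
        local -a mem

        mem_f=/proc/meminfo
        # Per-node stats live under sysfs when a node id is given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of sysfs lines

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total   # -> 1024 on this machine, per the trace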
00:05:27.466   00:36:16	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:27.466   00:36:16	-- setup/hugepages.sh@112 -- # get_nodes
00:05:27.466   00:36:16	-- setup/hugepages.sh@27 -- # local node
00:05:27.466   00:36:16	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:27.466   00:36:16	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:27.466   00:36:16	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:27.467   00:36:16	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:27.467   00:36:16	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:27.467   00:36:16	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
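get_nodes then enumerates the NUMA nodes and records each node's current 2 MB hugepage count; both nodes report 512 here. A sketch, under the assumption that the per-node counts come from the kernel's per-node hugepages counters:

    shopt -s extglob nullglob
    declare -a nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # assumed source of the 512s recorded above
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this dual-socket box
    }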
00:05:27.467   00:36:16	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:27.467   00:36:16	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:27.467    00:36:16	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:27.467    00:36:16	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.467    00:36:16	-- setup/common.sh@18 -- # local node=0
00:05:27.467    00:36:16	-- setup/common.sh@19 -- # local var val
00:05:27.467    00:36:16	-- setup/common.sh@20 -- # local mem_f mem
00:05:27.467    00:36:16	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.467    00:36:16	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:27.467    00:36:16	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:27.467    00:36:16	-- setup/common.sh@28 -- # mapfile -t mem
00:05:27.467    00:36:16	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.467    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.467    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.467     00:36:16	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        43347048 kB' 'MemUsed:         4717800 kB' 'SwapCached:            0 kB' 'Active:          2726612 kB' 'Inactive:         117812 kB' 'Active(anon):    2461716 kB' 'Inactive(anon):        0 kB' 'Active(file):     264896 kB' 'Inactive(file):   117812 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       2448472 kB' 'Mapped:            97872 kB' 'AnonPages:        399088 kB' 'Shmem:           2065764 kB' 'KernelStack:        9720 kB' 'PageTables:         5120 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     115520 kB' 'Slab:             370248 kB' 'SReclaimable:     115520 kB' 'SUnreclaim:       254728 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:05:27.467    00:36:16	-- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skipped node0 meminfo keys (MemTotal … HugePages_Free), none matching HugePages_Surp]
00:05:27.468    00:36:16	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.468    00:36:16	-- setup/common.sh@33 -- # echo 0
00:05:27.468    00:36:16	-- setup/common.sh@33 -- # return 0
00:05:27.468   00:36:16	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:27.468   00:36:16	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:27.468   00:36:16	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:27.468    00:36:16	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:27.468    00:36:16	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.468    00:36:16	-- setup/common.sh@18 -- # local node=1
00:05:27.468    00:36:16	-- setup/common.sh@19 -- # local var val
00:05:27.468    00:36:16	-- setup/common.sh@20 -- # local mem_f mem
00:05:27.468    00:36:16	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.468    00:36:16	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:27.468    00:36:16	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:27.468    00:36:16	-- setup/common.sh@28 -- # mapfile -t mem
00:05:27.468    00:36:16	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.468    00:36:16	-- setup/common.sh@31 -- # IFS=': '
00:05:27.468    00:36:16	-- setup/common.sh@31 -- # read -r var val _
00:05:27.468     00:36:16	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220548 kB' 'MemFree:        33740588 kB' 'MemUsed:        10479960 kB' 'SwapCached:            0 kB' 'Active:          5231340 kB' 'Inactive:        3572892 kB' 'Active(anon):    5108312 kB' 'Inactive(anon):        0 kB' 'Active(file):     123028 kB' 'Inactive(file):  3572892 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8710440 kB' 'Mapped:            52200 kB' 'AnonPages:         93872 kB' 'Shmem:           5014520 kB' 'KernelStack:        6280 kB' 'PageTables:         2348 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80128 kB' 'Slab:             258080 kB' 'SReclaimable:      80128 kB' 'SUnreclaim:       177952 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:05:27.468    00:36:16	-- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skipped node1 meminfo keys (MemTotal … HugePages_Free), none matching HugePages_Surp]
00:05:27.469    00:36:16	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.469    00:36:16	-- setup/common.sh@33 -- # echo 0
00:05:27.469    00:36:16	-- setup/common.sh@33 -- # return 0
00:05:27.469   00:36:16	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
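With both per-node HugePages_Surp lookups returning 0, the expected counts stay at 512 per node. The accounting loop at hugepages.sh@115-117 amounts to:

    # resv and nodes_test are locals of verify_nr_hugepages, set earlier
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # reserved pages, 0 here
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # surplus, 0 here
    done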
00:05:27.469   00:36:16	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:27.469   00:36:16	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:27.469   00:36:16	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:27.469   00:36:16	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:27.469  node0=512 expecting 512
00:05:27.469   00:36:16	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:27.469   00:36:16	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:27.469   00:36:16	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:27.469   00:36:16	-- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:27.469  node1=512 expecting 512
00:05:27.469   00:36:16	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
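The final check (hugepages.sh@126-130) collapses the observed and expected per-node counts into two key sets and compares them; with 512 on both nodes each set reduces to the single key 512, so the test passes. One plausible reading of the trace:

    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # distinct observed counts
        sorted_s[${nodes_sys[node]}]=1    # distinct expected counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # "512" == "512"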
00:05:27.469  
00:05:27.469  real	0m3.393s
00:05:27.469  user	0m1.323s
00:05:27.469  sys	0m2.163s
00:05:27.469   00:36:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:27.469   00:36:16	-- common/autotest_common.sh@10 -- # set +x
00:05:27.469  ************************************
00:05:27.469  END TEST per_node_1G_alloc
00:05:27.469  ************************************
00:05:27.469   00:36:16	-- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:27.469   00:36:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:27.469   00:36:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:27.469   00:36:16	-- common/autotest_common.sh@10 -- # set +x
00:05:27.469  ************************************
00:05:27.469  START TEST even_2G_alloc
00:05:27.469  ************************************
00:05:27.469   00:36:16	-- common/autotest_common.sh@1114 -- # even_2G_alloc
00:05:27.469   00:36:16	-- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:27.469   00:36:16	-- setup/hugepages.sh@49 -- # local size=2097152
00:05:27.469   00:36:16	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:27.469   00:36:16	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:27.469   00:36:16	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:27.469   00:36:16	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:27.469   00:36:16	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:27.469   00:36:16	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:27.469   00:36:16	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:27.469   00:36:16	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:27.469   00:36:16	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:27.469   00:36:16	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:27.469   00:36:16	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:27.469   00:36:16	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:27.469   00:36:16	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:27.469   00:36:16	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:27.469   00:36:16	-- setup/hugepages.sh@83 -- # : 512
00:05:27.469   00:36:16	-- setup/hugepages.sh@84 -- # : 1
00:05:27.469   00:36:16	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:27.469   00:36:16	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:27.469   00:36:16	-- setup/hugepages.sh@83 -- # : 0
00:05:27.469   00:36:16	-- setup/hugepages.sh@84 -- # : 0
00:05:27.469   00:36:16	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
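get_test_nr_hugepages turns the requested size (2097152, apparently in kB, i.e. 2 GB) into a page count and, with no user-specified nodes, back-fills an even split across the detected nodes. The arithmetic behind the values traced above:

    size=2097152            # requested allocation in kB (2 GB)
    default_hugepages=2048  # default hugepage size in kB
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

    _no_nodes=2
    per_node=$(( nr_hugepages / _no_nodes ))       # 512
    while (( _no_nodes > 0 )); do
        nodes_test[--_no_nodes]=$per_node          # fills node1, then node0
    done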
00:05:27.469   00:36:16	-- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:27.469   00:36:16	-- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:27.469   00:36:16	-- setup/hugepages.sh@153 -- # setup output
00:05:27.469   00:36:16	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:27.469   00:36:16	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:30.763  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:30.763  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:30.763  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
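setup.sh is SPDK's device-binding script; the lines above confirm that the NVMe SSD at 0000:5e:00.0 (8086:0a54, an Intel data-center NVMe drive) and the sixteen DMA channels (8086:2021, IOAT/CBDMA engines) are already detached from their kernel drivers and bound to vfio-pci for userspace I/O. The binding can be inspected directly:

    dev=0000:5e:00.0   # the NVMe device under test
    basename "$(readlink "/sys/bus/pci/devices/$dev/driver")"
    # -> vfio-pci (hence "Already using the vfio-pci driver" above)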
00:05:30.763   00:36:19	-- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:30.763   00:36:19	-- setup/hugepages.sh@89 -- # local node
00:05:30.763   00:36:19	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:30.763   00:36:19	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:30.763   00:36:19	-- setup/hugepages.sh@92 -- # local surp
00:05:30.763   00:36:19	-- setup/hugepages.sh@93 -- # local resv
00:05:30.763   00:36:19	-- setup/hugepages.sh@94 -- # local anon
00:05:30.763   00:36:19	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
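The pattern test above reads /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed token marks the active THP mode; "always [madvise] never" means madvise. The AnonHugePages lookup that follows only runs when the mode is not pinned to [never], since only then can transparent hugepages inflate the anonymous-page counters:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB here, so anon=0 below
    fi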
00:05:30.763    00:36:19	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:30.763    00:36:19	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:30.763    00:36:19	-- setup/common.sh@18 -- # local node=
00:05:30.763    00:36:19	-- setup/common.sh@19 -- # local var val
00:05:30.763    00:36:19	-- setup/common.sh@20 -- # local mem_f mem
00:05:30.763    00:36:19	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.763    00:36:19	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.763    00:36:19	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.763    00:36:19	-- setup/common.sh@28 -- # mapfile -t mem
00:05:30.763    00:36:19	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.763    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.763    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.763     00:36:19	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77083324 kB' 'MemAvailable:   80606396 kB' 'Buffers:            8064 kB' 'Cached:         11150912 kB' 'SwapCached:            0 kB' 'Active:          7962772 kB' 'Inactive:        3690704 kB' 'Active(anon):    7574848 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        497364 kB' 'Mapped:           150064 kB' 'Shmem:           7080348 kB' 'KReclaimable:     195648 kB' 'Slab:             628008 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432360 kB' 'KernelStack:       15920 kB' 'PageTables:         7644 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8765780 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199128 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:30.764    00:36:19	-- setup/common.sh@31-32 -- # [xtrace condensed: read/compare loop skipped meminfo keys (MemTotal … HardwareCorrupted), none matching AnonHugePages]
00:05:30.765    00:36:19	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:30.765    00:36:19	-- setup/common.sh@33 -- # echo 0
00:05:30.765    00:36:19	-- setup/common.sh@33 -- # return 0
00:05:30.765   00:36:19	-- setup/hugepages.sh@97 -- # anon=0
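With anon known, verify_nr_hugepages re-reads the global counters and applies the same identity already checked at hugepages.sh@110: HugePages_Total must equal the requested pages plus surplus plus reserved. In sketch form, assuming resv is sourced from HugePages_Rsvd:

    surp=$(get_meminfo HugePages_Surp)     # the lookup the trace enters next
    resv=$(get_meminfo HugePages_Rsvd)     # assumed source of resv
    total=$(get_meminfo HugePages_Total)
    (( total == NRHUGE + surp + resv ))    # 1024 == 1024 + 0 + 0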
00:05:30.765    00:36:19	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:30.765    00:36:19	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.765    00:36:19	-- setup/common.sh@18 -- # local node=
00:05:30.765    00:36:19	-- setup/common.sh@19 -- # local var val
00:05:30.765    00:36:19	-- setup/common.sh@20 -- # local mem_f mem
00:05:30.765    00:36:19	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.765    00:36:19	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.765    00:36:19	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.765    00:36:19	-- setup/common.sh@28 -- # mapfile -t mem
00:05:30.765    00:36:19	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.765    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.765    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.765     00:36:19	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77084836 kB' 'MemAvailable:   80607908 kB' 'Buffers:            8064 kB' 'Cached:         11150912 kB' 'SwapCached:            0 kB' 'Active:          7964080 kB' 'Inactive:        3690704 kB' 'Active(anon):    7576156 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        499104 kB' 'Mapped:           150060 kB' 'Shmem:           7080348 kB' 'KReclaimable:     195648 kB' 'Slab:             627892 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432244 kB' 'KernelStack:       16064 kB' 'PageTables:         7772 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8768060 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199160 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:30.765    00:36:19	-- setup/common.sh@31-32 -- # [trace condensed: 51 keys from the snapshot above, MemTotal through HugePages_Rsvd, each read with IFS=': ' / read -r var val _ and skipped via 'continue' until HugePages_Surp matched]
00:05:30.767    00:36:19	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.767    00:36:19	-- setup/common.sh@33 -- # echo 0
00:05:30.767    00:36:19	-- setup/common.sh@33 -- # return 0
00:05:30.767   00:36:19	-- setup/hugepages.sh@99 -- # surp=0
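The trace above shows get_meminfo scanning the meminfo snapshot key by key with IFS=': ' and read until the requested field matches, then echoing its value. A minimal standalone sketch of that pattern, assuming only the standard "Key:   value kB" layout of /proc/meminfo (get_meminfo_sketch is a hypothetical name, not part of setup/common.sh):

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys until the target matches
            echo "$val"                        # value in kB (or pages for HugePages_*)
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

Usage matching the surp=0 assignment above: surp=$(get_meminfo_sketch HugePages_Surp).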
00:05:30.767    00:36:19	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:30.767    00:36:19	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:30.767    00:36:19	-- setup/common.sh@18 -- # local node=
00:05:30.767    00:36:19	-- setup/common.sh@19 -- # local var val
00:05:30.767    00:36:19	-- setup/common.sh@20 -- # local mem_f mem
00:05:30.767    00:36:19	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.767    00:36:19	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.767    00:36:19	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.767    00:36:19	-- setup/common.sh@28 -- # mapfile -t mem
00:05:30.767    00:36:19	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.767    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.767    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.767     00:36:19	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77086000 kB' 'MemAvailable:   80609072 kB' 'Buffers:            8064 kB' 'Cached:         11150924 kB' 'SwapCached:            0 kB' 'Active:          7958824 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570900 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493960 kB' 'Mapped:           149580 kB' 'Shmem:           7080360 kB' 'KReclaimable:     195648 kB' 'Slab:             627892 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432244 kB' 'KernelStack:       16112 kB' 'PageTables:         7756 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8757764 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199080 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:30.767    00:36:19	-- setup/common.sh@31-32 -- # [trace condensed: 50 keys from the snapshot above, MemTotal through HugePages_Free, each read with IFS=': ' / read -r var val _ and skipped via 'continue' until HugePages_Rsvd matched]
00:05:30.769    00:36:19	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:30.769    00:36:19	-- setup/common.sh@33 -- # echo 0
00:05:30.769    00:36:19	-- setup/common.sh@33 -- # return 0
00:05:30.769   00:36:19	-- setup/hugepages.sh@100 -- # resv=0
00:05:30.769   00:36:19	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:30.769  nr_hugepages=1024
00:05:30.769   00:36:19	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:30.769  resv_hugepages=0
00:05:30.769   00:36:19	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:30.769  surplus_hugepages=0
00:05:30.769   00:36:19	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:30.769  anon_hugepages=0
00:05:30.769   00:36:19	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:30.769   00:36:19	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
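The two arithmetic checks above verify hugepage accounting: the HugePages_Total reported by the kernel should equal the requested nr_hugepages plus any surplus and reserved pages, which the preceding scans both found to be zero. A hedged sketch of that check, reusing get_meminfo_sketch from above (variable names are illustrative; the real logic lives in setup/hugepages.sh):

    nr_hugepages=1024 surp=0 resv=0                  # values recorded in the trace above
    total=$(get_meminfo_sketch HugePages_Total)      # 1024 on this machine
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: total=$total" >&2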
00:05:30.769    00:36:19	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:30.769    00:36:19	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:30.769    00:36:19	-- setup/common.sh@18 -- # local node=
00:05:30.769    00:36:19	-- setup/common.sh@19 -- # local var val
00:05:30.769    00:36:19	-- setup/common.sh@20 -- # local mem_f mem
00:05:30.769    00:36:19	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.769    00:36:19	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.769    00:36:19	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.769    00:36:19	-- setup/common.sh@28 -- # mapfile -t mem
00:05:30.769    00:36:19	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.769    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.769    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.769     00:36:19	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77086520 kB' 'MemAvailable:   80609592 kB' 'Buffers:            8064 kB' 'Cached:         11150940 kB' 'SwapCached:            0 kB' 'Active:          7957468 kB' 'Inactive:        3690704 kB' 'Active(anon):    7569544 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        492472 kB' 'Mapped:           149220 kB' 'Shmem:           7080376 kB' 'KReclaimable:     195648 kB' 'Slab:             627872 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432224 kB' 'KernelStack:       15968 kB' 'PageTables:         7308 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8757780 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199064 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:30.769    00:36:19	-- setup/common.sh@31-32 -- # [trace condensed: 48 keys from the snapshot above, MemTotal through Unaccepted, each read with IFS=': ' / read -r var val _ and skipped via 'continue' until HugePages_Total matched]
00:05:30.771    00:36:19	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:30.771    00:36:19	-- setup/common.sh@33 -- # echo 1024
00:05:30.771    00:36:19	-- setup/common.sh@33 -- # return 0
00:05:30.771   00:36:19	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:30.771   00:36:19	-- setup/hugepages.sh@112 -- # get_nodes
00:05:30.771   00:36:19	-- setup/hugepages.sh@27 -- # local node
00:05:30.771   00:36:19	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:30.771   00:36:19	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:30.771   00:36:19	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:30.771   00:36:19	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:30.771   00:36:19	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:30.771   00:36:19	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
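get_nodes above enumerates NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern) and records a per-node hugepage count; the trace shows two nodes with 512 pages each. A sketch under the assumption that the per-node count is read from each node's 2048 kB nr_hugepages counter, as on this host:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}                        # 2 in this run
    (( no_nodes > 0 )) || echo "no NUMA nodes visible under sysfs" >&2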
00:05:30.771   00:36:19	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:30.771   00:36:19	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:30.771    00:36:19	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:30.771    00:36:19	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.771    00:36:19	-- setup/common.sh@18 -- # local node=0
00:05:30.771    00:36:19	-- setup/common.sh@19 -- # local var val
00:05:30.771    00:36:19	-- setup/common.sh@20 -- # local mem_f mem
00:05:30.771    00:36:19	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.771    00:36:19	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:30.771    00:36:19	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:30.771    00:36:19	-- setup/common.sh@28 -- # mapfile -t mem
00:05:30.771    00:36:19	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.771    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.771    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.771     00:36:19	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        43355080 kB' 'MemUsed:         4709768 kB' 'SwapCached:            0 kB' 'Active:          2726928 kB' 'Inactive:         117812 kB' 'Active(anon):    2462032 kB' 'Inactive(anon):        0 kB' 'Active(file):     264896 kB' 'Inactive(file):   117812 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       2448572 kB' 'Mapped:            97356 kB' 'AnonPages:        399376 kB' 'Shmem:           2065864 kB' 'KernelStack:        9720 kB' 'PageTables:         5128 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     115520 kB' 'Slab:             369888 kB' 'SReclaimable:     115520 kB' 'SUnreclaim:       254368 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
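This call passes node=0, so get_meminfo switches from /proc/meminfo to the node-local file under sysfs and strips the "Node 0 " prefix each of its lines carries, which is why the snapshot above lists node-only fields such as MemUsed and FilePages. A sketch of that selection step as traced at common.sh@22-29 (extglob is required for the prefix strip):

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                 # drop the "Node N " prefix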
00:05:30.771    00:36:19	-- setup/common.sh@31-32 -- # [trace condensed: 31 node0 keys so far, MemTotal through ShmemPmdMapped, each read with IFS=': ' / read -r var val _ and skipped via 'continue' while scanning for HugePages_Surp; scan continues]
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # continue
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # continue
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # continue
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # continue
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # continue
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.772    00:36:19	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.772    00:36:19	-- setup/common.sh@33 -- # echo 0
00:05:30.772    00:36:19	-- setup/common.sh@33 -- # return 0
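The lookup that just returned reduces to this pattern, a minimal sketch reconstructed from the xtrace (get_meminfo, mem_f, and the per-node "Node <n> " prefix handling all appear in the setup/common.sh lines traced here; the sed strip is a stand-in for the extglob expansion the real script uses):

    # sketch: fetch one field from /proc/meminfo or a per-node meminfo file
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node meminfo lines carry a "Node <n> " prefix; drop it before parsing
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Surp 1   # the node-1 call traced next also echoes 0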
00:05:30.772   00:36:19	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:30.772   00:36:19	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:30.772   00:36:19	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:30.772    00:36:19	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:30.772    00:36:19	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:30.772    00:36:19	-- setup/common.sh@18 -- # local node=1
00:05:30.772    00:36:19	-- setup/common.sh@19 -- # local var val
00:05:30.772    00:36:19	-- setup/common.sh@20 -- # local mem_f mem
00:05:30.772    00:36:19	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.772    00:36:19	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:30.772    00:36:19	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:30.772    00:36:19	-- setup/common.sh@28 -- # mapfile -t mem
00:05:30.772    00:36:19	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # IFS=': '
00:05:30.772    00:36:19	-- setup/common.sh@31 -- # read -r var val _
00:05:30.773     00:36:19	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220548 kB' 'MemFree:        33731884 kB' 'MemUsed:        10488664 kB' 'SwapCached:            0 kB' 'Active:          5230164 kB' 'Inactive:        3572892 kB' 'Active(anon):    5107136 kB' 'Inactive(anon):        0 kB' 'Active(file):     123028 kB' 'Inactive(file):  3572892 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8710456 kB' 'Mapped:            51864 kB' 'AnonPages:         92692 kB' 'Shmem:           5014536 kB' 'KernelStack:        6232 kB' 'PageTables:         2132 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80128 kB' 'Slab:             257984 kB' 'SReclaimable:      80128 kB' 'SUnreclaim:       177856 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:05:30.773    00:36:19	-- setup/common.sh@31..32 -- # [xtrace condensed: the same read/compare/continue pair walks every node-1 meminfo field printed above, MemTotal through HugePages_Free, until HugePages_Surp matches below]
00:05:30.773    00:36:19	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:30.773    00:36:19	-- setup/common.sh@33 -- # echo 0
00:05:30.773    00:36:19	-- setup/common.sh@33 -- # return 0
00:05:30.773   00:36:19	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:30.773   00:36:19	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:30.773   00:36:19	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:30.773   00:36:19	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:30.773   00:36:19	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:30.773  node0=512 expecting 512
00:05:30.773   00:36:19	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:30.773   00:36:19	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:30.773   00:36:19	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:30.773   00:36:19	-- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:30.773  node1=512 expecting 512
00:05:30.773   00:36:19	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
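The sorted_t/sorted_s assignments above are a dedup trick: each per-node count is stored as an array index, so if every node holds the expected count the array ends up with a single key. A paraphrase of the hugepages.sh@126-130 logic as traced (expected is a stand-in name for the script's test value):

    declare -a nodes_test=([0]=512 [1]=512) sorted_t=()
    expected=512
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1            # collapse duplicate counts into one key
        echo "node$node=${nodes_test[node]} expecting $expected"
    done
    [[ ${!sorted_t[*]} == "$expected" ]]        # true iff all nodes converged, as at @130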
00:05:30.773  
00:05:30.773  real	0m3.461s
00:05:30.773  user	0m1.311s
00:05:30.773  sys	0m2.246s
00:05:30.773   00:36:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:30.773   00:36:19	-- common/autotest_common.sh@10 -- # set +x
00:05:30.773  ************************************
00:05:30.773  END TEST even_2G_alloc
00:05:30.774  ************************************
00:05:31.033   00:36:20	-- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:31.033   00:36:20	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:31.033   00:36:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:31.033   00:36:20	-- common/autotest_common.sh@10 -- # set +x
00:05:31.033  ************************************
00:05:31.033  START TEST odd_alloc
00:05:31.033  ************************************
00:05:31.033   00:36:20	-- common/autotest_common.sh@1114 -- # odd_alloc
00:05:31.033   00:36:20	-- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:31.033   00:36:20	-- setup/hugepages.sh@49 -- # local size=2098176
00:05:31.033   00:36:20	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:31.033   00:36:20	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:31.033   00:36:20	-- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:31.033   00:36:20	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:31.033   00:36:20	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:31.033   00:36:20	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:31.033   00:36:20	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:31.033   00:36:20	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:31.033   00:36:20	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:31.033   00:36:20	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:31.033   00:36:20	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:31.033   00:36:20	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:31.033   00:36:20	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:31.033   00:36:20	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:31.033   00:36:20	-- setup/hugepages.sh@83 -- # : 513
00:05:31.033   00:36:20	-- setup/hugepages.sh@84 -- # : 1
00:05:31.033   00:36:20	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:31.033   00:36:20	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:31.033   00:36:20	-- setup/hugepages.sh@83 -- # : 0
00:05:31.033   00:36:20	-- setup/hugepages.sh@84 -- # : 0
00:05:31.033   00:36:20	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
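The arithmetic above is the odd split: HUGEMEM=2049 MB is 2098176 kB, which at the 2048 kB default hugepage size comes out to 1025 pages (1024.5 rounded up, matching the nr_hugepages=1025 line at @57), and the @81-84 loop deals them out back to front so node0 absorbs the odd remainder. A sketch reconstructed from the ": 513" / ": 1" values in the trace (variable handling is an approximation):

    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test=()
    while (( _no_nodes > 0 )); do
        (( share = _nr_hugepages / _no_nodes ))   # 1025/2 = 512, then 513/1 = 513
        nodes_test[_no_nodes - 1]=$share
        (( _nr_hugepages -= share, _no_nodes -= 1 ))
    done
    echo "${nodes_test[@]}"   # 513 512: node0 gets 513, node1 gets 512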
00:05:31.033   00:36:20	-- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:31.033   00:36:20	-- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:31.033   00:36:20	-- setup/hugepages.sh@160 -- # setup output
00:05:31.033   00:36:20	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:31.033   00:36:20	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:34.329  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:34.329  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:34.329  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
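The "Already using the vfio-pci driver" lines mean each PCI function was left bound to vfio-pci by the preceding test, so this setup pass only reshapes the hugepage pools. The standalone invocation this stage implies, with HUGEMEM in MB and HUGE_EVEN_ALLOC requesting a per-node spread (both values read off the trace above; consult scripts/setup.sh for the authoritative semantics):

    HUGEMEM=2049 HUGE_EVEN_ALLOC=yes ./scripts/setup.sh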
00:05:34.329   00:36:23	-- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:34.329   00:36:23	-- setup/hugepages.sh@89 -- # local node
00:05:34.329   00:36:23	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:34.329   00:36:23	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:34.329   00:36:23	-- setup/hugepages.sh@92 -- # local surp
00:05:34.329   00:36:23	-- setup/hugepages.sh@93 -- # local resv
00:05:34.330   00:36:23	-- setup/hugepages.sh@94 -- # local anon
00:05:34.330   00:36:23	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:34.330    00:36:23	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:34.330    00:36:23	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:34.330    00:36:23	-- setup/common.sh@18 -- # local node=
00:05:34.330    00:36:23	-- setup/common.sh@19 -- # local var val
00:05:34.330    00:36:23	-- setup/common.sh@20 -- # local mem_f mem
00:05:34.330    00:36:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.330    00:36:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:34.330    00:36:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:34.330    00:36:23	-- setup/common.sh@28 -- # mapfile -t mem
00:05:34.330    00:36:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.330    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.330    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.330     00:36:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77091856 kB' 'MemAvailable:   80614928 kB' 'Buffers:            8064 kB' 'Cached:         11151020 kB' 'SwapCached:            0 kB' 'Active:          7958912 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570988 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493672 kB' 'Mapped:           149224 kB' 'Shmem:           7080456 kB' 'KReclaimable:     195648 kB' 'Slab:             627728 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432080 kB' 'KernelStack:       15984 kB' 'PageTables:         7316 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481700 kB' 'Committed_AS:    8758236 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199000 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:34.330    00:36:23	-- setup/common.sh@31..32 -- # [xtrace condensed: the read/compare/continue pair walks every /proc/meminfo field printed above, MemTotal through HardwareCorrupted, until AnonHugePages matches below]
00:05:34.331    00:36:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:34.331    00:36:23	-- setup/common.sh@33 -- # echo 0
00:05:34.331    00:36:23	-- setup/common.sh@33 -- # return 0
00:05:34.331   00:36:23	-- setup/hugepages.sh@97 -- # anon=0
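anon=0 is the transparent-hugepage baseline: because THP is not set to [never] (the "always [madvise] never" test at hugepages.sh@96 above), the script samples AnonHugePages first so that THP-backed anonymous memory can be accounted for separately from the hugetlb pool counters. The same two checks by hand, with this run's values in comments:

    cat /sys/kernel/mm/transparent_hugepage/enabled   # always [madvise] never
    grep AnonHugePages /proc/meminfo                  # AnonHugePages: 0 kB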
00:05:34.331    00:36:23	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:34.331    00:36:23	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:34.331    00:36:23	-- setup/common.sh@18 -- # local node=
00:05:34.331    00:36:23	-- setup/common.sh@19 -- # local var val
00:05:34.331    00:36:23	-- setup/common.sh@20 -- # local mem_f mem
00:05:34.331    00:36:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.331    00:36:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:34.331    00:36:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:34.331    00:36:23	-- setup/common.sh@28 -- # mapfile -t mem
00:05:34.331    00:36:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.331    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.331    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.331     00:36:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77091604 kB' 'MemAvailable:   80614676 kB' 'Buffers:            8064 kB' 'Cached:         11151024 kB' 'SwapCached:            0 kB' 'Active:          7958620 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570696 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493452 kB' 'Mapped:           149200 kB' 'Shmem:           7080460 kB' 'KReclaimable:     195648 kB' 'Slab:             627780 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432132 kB' 'KernelStack:       15968 kB' 'PageTables:         7300 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481700 kB' 'Committed_AS:    8758248 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      198968 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:34.331    00:36:23	-- setup/common.sh@31..32 -- # [xtrace condensed: the read/compare/continue pair walks every /proc/meminfo field printed above, MemTotal through HugePages_Rsvd, until HugePages_Surp matches below]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.332    00:36:23	-- setup/common.sh@33 -- # echo 0
00:05:34.332    00:36:23	-- setup/common.sh@33 -- # return 0
00:05:34.332   00:36:23	-- setup/hugepages.sh@99 -- # surp=0
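surp=0 and the HugePages_Rsvd lookup that follows cover the two counters that can skew a raw HugePages_Total comparison: HugePages_Surp counts surplus pages allocated beyond nr_hugepages via overcommit, and HugePages_Rsvd counts pages promised to a mapping but not yet faulted in (see the kernel's hugetlbpage documentation). All four pool counters at once:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo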
00:05:34.332    00:36:23	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:34.332    00:36:23	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:34.332    00:36:23	-- setup/common.sh@18 -- # local node=
00:05:34.332    00:36:23	-- setup/common.sh@19 -- # local var val
00:05:34.332    00:36:23	-- setup/common.sh@20 -- # local mem_f mem
00:05:34.332    00:36:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.332    00:36:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:34.332    00:36:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:34.332    00:36:23	-- setup/common.sh@28 -- # mapfile -t mem
00:05:34.332    00:36:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332     00:36:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77091604 kB' 'MemAvailable:   80614676 kB' 'Buffers:            8064 kB' 'Cached:         11151024 kB' 'SwapCached:            0 kB' 'Active:          7958620 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570696 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493452 kB' 'Mapped:           149200 kB' 'Shmem:           7080460 kB' 'KReclaimable:     195648 kB' 'Slab:             627780 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432132 kB' 'KernelStack:       15968 kB' 'PageTables:         7300 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481700 kB' 'Committed_AS:    8758264 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      198968 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.332    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.332    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.333    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.333    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:34.333    00:36:23	-- setup/common.sh@33 -- # echo 0
00:05:34.333    00:36:23	-- setup/common.sh@33 -- # return 0
00:05:34.333   00:36:23	-- setup/hugepages.sh@100 -- # resv=0
00:05:34.333   00:36:23	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:34.333  nr_hugepages=1025
00:05:34.333   00:36:23	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:34.333  resv_hugepages=0
00:05:34.333   00:36:23	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:34.333  surplus_hugepages=0
00:05:34.334   00:36:23	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:34.334  anon_hugepages=0
00:05:34.334   00:36:23	-- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:34.334   00:36:23	-- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
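With the per-key lookups done, setup/hugepages.sh echoes the collected values (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and sanity-checks them at script lines @107 and @109: the kernel-reported page count must equal the configured count once surplus and reserved pages are accounted for. In isolation, the arithmetic being asserted is just:

    # The checks behind hugepages.sh@107/@109, with this run's values filled in.
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv ))   # kernel total == configured + surp + resv
    (( 1025 == nr_hugepages ))                 # and no surplus/reserved slack at all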
00:05:34.334    00:36:23	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:34.334    00:36:23	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:34.334    00:36:23	-- setup/common.sh@18 -- # local node=
00:05:34.334    00:36:23	-- setup/common.sh@19 -- # local var val
00:05:34.334    00:36:23	-- setup/common.sh@20 -- # local mem_f mem
00:05:34.334    00:36:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.334    00:36:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:34.334    00:36:23	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:34.334    00:36:23	-- setup/common.sh@28 -- # mapfile -t mem
00:05:34.334    00:36:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334     00:36:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77091604 kB' 'MemAvailable:   80614676 kB' 'Buffers:            8064 kB' 'Cached:         11151028 kB' 'SwapCached:            0 kB' 'Active:          7958768 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570844 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493596 kB' 'Mapped:           149200 kB' 'Shmem:           7080464 kB' 'KReclaimable:     195648 kB' 'Slab:             627780 kB' 'SReclaimable:     195648 kB' 'SUnreclaim:       432132 kB' 'KernelStack:       15952 kB' 'PageTables:         7252 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481700 kB' 'Committed_AS:    8758276 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      198968 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.334    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.334    00:36:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:34.335    00:36:23	-- setup/common.sh@33 -- # echo 1025
00:05:34.335    00:36:23	-- setup/common.sh@33 -- # return 0
00:05:34.335   00:36:23	-- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
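The HugePages_Total lookup just above is a fresh read of /proc/meminfo rather than a reuse of the earlier snapshot, and the @110 check confirms the kernel still reports 1025 pages. Spelled out with this run's numbers (using the get_meminfo sketch from earlier):

    # hugepages.sh@110, in isolation:
    (( $(get_meminfo HugePages_Total) == 1025 + 0 + 0 ))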
00:05:34.335   00:36:23	-- setup/hugepages.sh@112 -- # get_nodes
00:05:34.335   00:36:23	-- setup/hugepages.sh@27 -- # local node
00:05:34.335   00:36:23	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:34.335   00:36:23	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:34.335   00:36:23	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:34.335   00:36:23	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:34.335   00:36:23	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:34.335   00:36:23	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
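get_nodes (hugepages.sh@27-@33) enumerates /sys/devices/system/node/node<N> and records the expected per-node page counts: 512 pages for node0 and 513 for node1 in this run, with no_nodes=2. A sketch of that enumeration, keeping the names visible in the trace; the trace only shows the resulting assignments (512 and 513), so reading the counts from the per-node nr_hugepages sysfs file is an assumption here:

    # Sketch of get_nodes as it appears in the trace (names from the trace;
    # the sysfs read is assumed -- the trace shows only the final assignments).
    get_nodes() {
            local node
            shopt -s extglob nullglob
            for node in /sys/devices/system/node/node+([0-9]); do
                    # Key the array by the numeric suffix: node0 -> 0, node1 -> 1.
                    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
            done
            no_nodes=${#nodes_sys[@]}
            (( no_nodes > 0 ))   # fail if no NUMA nodes were found
    }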
00:05:34.335   00:36:23	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:34.335   00:36:23	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:34.335    00:36:23	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:34.335    00:36:23	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:34.335    00:36:23	-- setup/common.sh@18 -- # local node=0
00:05:34.335    00:36:23	-- setup/common.sh@19 -- # local var val
00:05:34.335    00:36:23	-- setup/common.sh@20 -- # local mem_f mem
00:05:34.335    00:36:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.335    00:36:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:34.335    00:36:23	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:34.335    00:36:23	-- setup/common.sh@28 -- # mapfile -t mem
00:05:34.335    00:36:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335     00:36:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        43353004 kB' 'MemUsed:         4711844 kB' 'SwapCached:            0 kB' 'Active:          2728232 kB' 'Inactive:         117812 kB' 'Active(anon):    2463336 kB' 'Inactive(anon):        0 kB' 'Active(file):     264896 kB' 'Inactive(file):   117812 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       2448600 kB' 'Mapped:            97356 kB' 'AnonPages:        400584 kB' 'Shmem:           2065892 kB' 'KernelStack:        9752 kB' 'PageTables:         5220 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     115520 kB' 'Slab:             369732 kB' 'SReclaimable:     115520 kB' 'SUnreclaim:       254212 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.335    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.335    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@33 -- # echo 0
00:05:34.336    00:36:23	-- setup/common.sh@33 -- # return 0
00:05:34.336   00:36:23	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
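Once the totals line up, hugepages.sh@115-@117 walks every node, seeds nodes_test[node] with the reserved-page count, and adds that node's HugePages_Surp. Note the mem_f switch in the trace above: with node=0 the per-node file /sys/devices/system/node/node0/meminfo exists and is used instead of /proc/meminfo. Node0 contributes 0 surplus here; the same lookup for node1 follows below. The loop, in isolation:

    # The per-node accumulation at hugepages.sh@115-@117 (sketch; resv=0 in this run).
    for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))
            (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done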
00:05:34.336   00:36:23	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:34.336   00:36:23	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:34.336    00:36:23	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:34.336    00:36:23	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:34.336    00:36:23	-- setup/common.sh@18 -- # local node=1
00:05:34.336    00:36:23	-- setup/common.sh@19 -- # local var val
00:05:34.336    00:36:23	-- setup/common.sh@20 -- # local mem_f mem
00:05:34.336    00:36:23	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:34.336    00:36:23	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:34.336    00:36:23	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:34.336    00:36:23	-- setup/common.sh@28 -- # mapfile -t mem
00:05:34.336    00:36:23	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336     00:36:23	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220548 kB' 'MemFree:        33738872 kB' 'MemUsed:        10481676 kB' 'SwapCached:            0 kB' 'Active:          5230340 kB' 'Inactive:        3572892 kB' 'Active(anon):    5107312 kB' 'Inactive(anon):        0 kB' 'Active(file):     123028 kB' 'Inactive(file):  3572892 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8710540 kB' 'Mapped:            51844 kB' 'AnonPages:         92760 kB' 'Shmem:           5014620 kB' 'KernelStack:        6200 kB' 'PageTables:         2032 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80128 kB' 'Slab:             258048 kB' 'SReclaimable:      80128 kB' 'SUnreclaim:       177920 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   513' 'HugePages_Free:    513' 'HugePages_Surp:      0'
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.336    00:36:23	-- setup/common.sh@32 -- # continue
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # IFS=': '
00:05:34.336    00:36:23	-- setup/common.sh@31 -- # read -r var val _
00:05:34.337    00:36:23	-- setup/common.sh@32 -- # [xtrace condensed: the IFS=': ' / read / [[ field == HugePages_Surp ]] / continue quartet repeats for every remaining /proc/meminfo field, Active(file) through HugePages_Free, none matching]
00:05:34.337    00:36:23	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:34.337    00:36:23	-- setup/common.sh@33 -- # echo 0
00:05:34.337    00:36:23	-- setup/common.sh@33 -- # return 0
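
The return above ends one get_meminfo call: the function dumps the /proc/meminfo snapshot once, then walks it field by field until the requested key matches and echoes its value. A minimal sketch of that pattern, reconstructed from the xtrace (per-node meminfo handling, which the real setup/common.sh does with mapfile and a "Node N " prefix strip, is omitted here):

  get_meminfo_sketch() {
      # Scan /proc/meminfo with IFS=': ' so "Key:   value kB" splits into
      # var=Key, val=value; print the value of the first matching key.
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  get_meminfo_sketch HugePages_Surp   # prints 0 on this host, as above
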
00:05:34.337   00:36:23	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:34.337   00:36:23	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:34.337   00:36:23	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:34.337   00:36:23	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:05:34.337  node0=512 expecting 513
00:05:34.337   00:36:23	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:34.337   00:36:23	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:34.337   00:36:23	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:34.337   00:36:23	-- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:34.337  node1=513 expecting 512
00:05:34.337   00:36:23	-- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
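
The sorted_t/sorted_s assignments above implement an order-insensitive comparison: each per-node page count is used as an array index, so listing the keys yields the counts sorted and de-duplicated, and a layout that merely swaps counts between nodes (512/513 here) still passes. A hedged sketch with this run's values:

  declare -a sorted_t sorted_s
  nodes_test=(512 513)   # counts the test configured per node (values from this run)
  nodes_sys=(513 512)    # counts the kernel reports per node
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1
      sorted_s[nodes_sys[node]]=1
  done
  # The key lists come out sorted: "512 513" on both sides.
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "layouts match: ${!sorted_t[*]}"
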
00:05:34.337  
00:05:34.337  real	0m3.394s
00:05:34.337  user	0m1.239s
00:05:34.337  sys	0m2.245s
00:05:34.337   00:36:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:34.337   00:36:23	-- common/autotest_common.sh@10 -- # set +x
00:05:34.337  ************************************
00:05:34.337  END TEST odd_alloc
00:05:34.337  ************************************
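
run_test, as the autotest_common.sh lines around the banners suggest, validates its argument count (the '[' 2 -le 1 ']' check above), prints the START/END banners with xtrace suppressed, and times the named test function; the real/user/sys triple above is that timing. A hedged, minimal sketch, not the actual implementation:

  run_test_sketch() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                  # yields the real/user/sys lines seen above
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }
  run_test_sketch custom_alloc custom_alloc
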
00:05:34.337   00:36:23	-- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:34.337   00:36:23	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:34.337   00:36:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:34.337   00:36:23	-- common/autotest_common.sh@10 -- # set +x
00:05:34.337  ************************************
00:05:34.337  START TEST custom_alloc
00:05:34.337  ************************************
00:05:34.337   00:36:23	-- common/autotest_common.sh@1114 -- # custom_alloc
00:05:34.337   00:36:23	-- setup/hugepages.sh@167 -- # local IFS=,
00:05:34.337   00:36:23	-- setup/hugepages.sh@169 -- # local node
00:05:34.337   00:36:23	-- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:34.337   00:36:23	-- setup/hugepages.sh@170 -- # local nodes_hp
00:05:34.337   00:36:23	-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:34.337   00:36:23	-- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:34.337   00:36:23	-- setup/hugepages.sh@49 -- # local size=1048576
00:05:34.337   00:36:23	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:34.337   00:36:23	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:34.337   00:36:23	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:34.337   00:36:23	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:34.337   00:36:23	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:34.337   00:36:23	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:34.337   00:36:23	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:34.337   00:36:23	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:34.337   00:36:23	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:34.337   00:36:23	-- setup/hugepages.sh@83 -- # : 256
00:05:34.337   00:36:23	-- setup/hugepages.sh@84 -- # : 1
00:05:34.337   00:36:23	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:34.337   00:36:23	-- setup/hugepages.sh@83 -- # : 0
00:05:34.337   00:36:23	-- setup/hugepages.sh@84 -- # : 0
00:05:34.337   00:36:23	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
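
Worked numbers for the get_test_nr_hugepages 1048576 call above, as a sketch: the size argument is in kB, the system hugepage size is 2048 kB (Hugepagesize in the meminfo snapshots below), and the per-node loop splits the total evenly across the two nodes:

  size_kb=1048576            # argument to get_test_nr_hugepages
  hugepage_kb=2048           # Hugepagesize on this host
  no_nodes=2
  nr_hugepages=$(( size_kb / hugepage_kb ))   # 1048576 / 2048 = 512
  per_node=$(( nr_hugepages / no_nodes ))     # 256 pages on each node, as set above
  echo "nr_hugepages=$nr_hugepages per_node=$per_node"
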
00:05:34.337   00:36:23	-- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:34.337   00:36:23	-- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:34.337   00:36:23	-- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:34.337   00:36:23	-- setup/hugepages.sh@49 -- # local size=2097152
00:05:34.337   00:36:23	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:34.338   00:36:23	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:34.338   00:36:23	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:34.338   00:36:23	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:34.338   00:36:23	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:34.338   00:36:23	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:34.338   00:36:23	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:34.338   00:36:23	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:34.338   00:36:23	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:34.338   00:36:23	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:34.338   00:36:23	-- setup/hugepages.sh@78 -- # return 0
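
This second pass takes a different branch than the first: nodes_hp already holds an entry (nodes_hp[0]=512 from hugepages.sh@175), so the (( 1 > 0 )) test routes into the loop that copies existing per-node targets instead of splitting evenly. A sketch of that copy:

  nodes_hp=([0]=512)                 # carried over from the 1 GB pass
  declare -a nodes_test
  for _no_nodes in "${!nodes_hp[@]}"; do
      nodes_test[_no_nodes]=${nodes_hp[_no_nodes]}   # node 0 keeps its 512 target
  done
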
00:05:34.338   00:36:23	-- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:05:34.338   00:36:23	-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:34.338   00:36:23	-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:34.338   00:36:23	-- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:34.338   00:36:23	-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:34.338   00:36:23	-- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:34.338   00:36:23	-- setup/hugepages.sh@62 -- # user_nodes=()
00:05:34.338   00:36:23	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:34.338   00:36:23	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:34.338   00:36:23	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:34.338   00:36:23	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:34.338   00:36:23	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:34.338   00:36:23	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:05:34.338   00:36:23	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:34.338   00:36:23	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:34.338   00:36:23	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:34.338   00:36:23	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:05:34.338   00:36:23	-- setup/hugepages.sh@78 -- # return 0
00:05:34.338   00:36:23	-- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
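
How the HUGENODE string above is assembled, sketched from the xtrace: each node's target is pushed onto the HUGENODE array, and because custom_alloc set IFS=, (hugepages.sh@167) the array joins into the comma-separated form that setup.sh consumes:

  nodes_hp=(512 1024)
  HUGENODE=()
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
  done
  IFS=,
  echo "HUGENODE=${HUGENODE[*]}"   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
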
00:05:34.338   00:36:23	-- setup/hugepages.sh@187 -- # setup output
00:05:34.338   00:36:23	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:34.338   00:36:23	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:37.629  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:37.629  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:37.629  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
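
Each "Already using the vfio-pci driver" line means setup.sh found that PCI function already bound to vfio-pci and left the binding alone. A hedged sketch of reading a binding back from sysfs (the address is one from the list above):

  bdf=0000:5e:00.0
  drv=/sys/bus/pci/devices/$bdf/driver
  [[ -e $drv ]] && basename "$(readlink "$drv")"   # prints vfio-pci on this host
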
00:05:37.629   00:36:26	-- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:05:37.629   00:36:26	-- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:37.629   00:36:26	-- setup/hugepages.sh@89 -- # local node
00:05:37.629   00:36:26	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:37.629   00:36:26	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:37.629   00:36:26	-- setup/hugepages.sh@92 -- # local surp
00:05:37.629   00:36:26	-- setup/hugepages.sh@93 -- # local resv
00:05:37.629   00:36:26	-- setup/hugepages.sh@94 -- # local anon
00:05:37.629   00:36:26	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:37.629    00:36:26	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:37.629    00:36:26	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:37.629    00:36:26	-- setup/common.sh@18 -- # local node=
00:05:37.629    00:36:26	-- setup/common.sh@19 -- # local var val
00:05:37.629    00:36:26	-- setup/common.sh@20 -- # local mem_f mem
00:05:37.629    00:36:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.629    00:36:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.629    00:36:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.629    00:36:26	-- setup/common.sh@28 -- # mapfile -t mem
00:05:37.629    00:36:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.629    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.629    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.629     00:36:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        76041188 kB' 'MemAvailable:   79564256 kB' 'Buffers:            8064 kB' 'Cached:         11151136 kB' 'SwapCached:            0 kB' 'Active:          7960776 kB' 'Inactive:        3690704 kB' 'Active(anon):    7572852 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        495040 kB' 'Mapped:           149316 kB' 'Shmem:           7080572 kB' 'KReclaimable:     195640 kB' 'Slab:             628156 kB' 'SReclaimable:     195640 kB' 'SUnreclaim:       432516 kB' 'KernelStack:       16144 kB' 'PageTables:         7572 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958436 kB' 'Committed_AS:    8758616 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199048 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:37.629    00:36:26	-- setup/common.sh@32 -- # [xtrace condensed: the IFS=': ' / read / [[ field == AnonHugePages ]] / continue quartet repeats for every /proc/meminfo field, MemTotal through HardwareCorrupted, none matching]
00:05:37.630    00:36:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.630    00:36:26	-- setup/common.sh@33 -- # echo 0
00:05:37.630    00:36:26	-- setup/common.sh@33 -- # return 0
00:05:37.630   00:36:26	-- setup/hugepages.sh@97 -- # anon=0
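
anon=0 comes from the AnonHugePages lookup above, which only runs because transparent hugepages are not fully disabled: hugepages.sh@96 tests the sysfs value (always [madvise] never on this host) against the pattern *[never]*. A sketch using the get_meminfo_sketch helper from earlier:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in this run
  else
      anon=0
  fi
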
00:05:37.630    00:36:26	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:37.630    00:36:26	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.630    00:36:26	-- setup/common.sh@18 -- # local node=
00:05:37.630    00:36:26	-- setup/common.sh@19 -- # local var val
00:05:37.630    00:36:26	-- setup/common.sh@20 -- # local mem_f mem
00:05:37.630    00:36:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.630    00:36:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.630    00:36:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.630    00:36:26	-- setup/common.sh@28 -- # mapfile -t mem
00:05:37.630    00:36:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.630     00:36:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        76044296 kB' 'MemAvailable:   79567364 kB' 'Buffers:            8064 kB' 'Cached:         11151148 kB' 'SwapCached:            0 kB' 'Active:          7958728 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570804 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493408 kB' 'Mapped:           149204 kB' 'Shmem:           7080584 kB' 'KReclaimable:     195640 kB' 'Slab:             628164 kB' 'SReclaimable:     195640 kB' 'SUnreclaim:       432524 kB' 'KernelStack:       15952 kB' 'PageTables:         7244 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958436 kB' 'Committed_AS:    8758628 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:37.630    00:36:26	-- setup/common.sh@32 -- # [xtrace condensed: the IFS=': ' / read / [[ field == HugePages_Surp ]] / continue quartet repeats for every /proc/meminfo field, MemTotal through HugePages_Rsvd, none matching]
00:05:37.631    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.631    00:36:26	-- setup/common.sh@33 -- # echo 0
00:05:37.631    00:36:26	-- setup/common.sh@33 -- # return 0
00:05:37.631   00:36:26	-- setup/hugepages.sh@99 -- # surp=0
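
verify_nr_hugepages collects its counters with the same scan, one field per call: surp=0 was just read, and the scan below fetches HugePages_Rsvd. Gathering all of them with the earlier sketch, with this run's values:

  total=$(get_meminfo_sketch HugePages_Total)   # 1536
  free=$(get_meminfo_sketch HugePages_Free)     # 1536
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0, per the snapshot above
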
00:05:37.631    00:36:26	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:37.631    00:36:26	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:37.631    00:36:26	-- setup/common.sh@18 -- # local node=
00:05:37.631    00:36:26	-- setup/common.sh@19 -- # local var val
00:05:37.631    00:36:26	-- setup/common.sh@20 -- # local mem_f mem
00:05:37.894    00:36:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.894    00:36:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.894    00:36:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.894    00:36:26	-- setup/common.sh@28 -- # mapfile -t mem
00:05:37.894    00:36:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.895     00:36:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        76044980 kB' 'MemAvailable:   79568048 kB' 'Buffers:            8064 kB' 'Cached:         11151148 kB' 'SwapCached:            0 kB' 'Active:          7959064 kB' 'Inactive:        3690704 kB' 'Active(anon):    7571140 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493748 kB' 'Mapped:           149204 kB' 'Shmem:           7080584 kB' 'KReclaimable:     195640 kB' 'Slab:             628164 kB' 'SReclaimable:     195640 kB' 'SUnreclaim:       432524 kB' 'KernelStack:       15952 kB' 'PageTables:         7244 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958436 kB' 'Committed_AS:    8758644 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:37.895    00:36:26	-- setup/common.sh@32 -- # [xtrace condensed: the IFS=': ' / read / [[ field == HugePages_Rsvd ]] / continue scan walks the /proc/meminfo fields from MemTotal onward]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.896    00:36:26	-- setup/common.sh@33 -- # echo 0
00:05:37.896    00:36:26	-- setup/common.sh@33 -- # return 0
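[editor's note] The block above is bash xtrace from setup/common.sh's get_meminfo helper: it loads a meminfo file into an array, then walks it line by line, continue-ing past every key until the requested one (HugePages_Rsvd here) matches, then echoes its value and returns. A minimal reconstruction inferred from the traced line numbers (the exact control flow in setup/common.sh may differ):

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the node-local meminfo file when NODE is given (sketch, not verbatim).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node-local files prefix every line with "Node N "; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }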
00:05:37.896   00:36:26	-- setup/hugepages.sh@100 -- # resv=0
00:05:37.896   00:36:26	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:05:37.896  nr_hugepages=1536
00:05:37.896   00:36:26	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:37.896  resv_hugepages=0
00:05:37.896   00:36:26	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:37.896  surplus_hugepages=0
00:05:37.896   00:36:26	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:37.896  anon_hugepages=0
00:05:37.896   00:36:26	-- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:37.896   00:36:26	-- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
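[editor's note] At this point hugepages.sh has collected the four summary values echoed above and cross-checks them: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages. With this run's values the guard at @107 is trivially satisfied:

    # 1536 == 1536 + 0 + 0  -> both (( ... )) checks above succeed
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2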
00:05:37.896    00:36:26	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:37.896    00:36:26	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:37.896    00:36:26	-- setup/common.sh@18 -- # local node=
00:05:37.896    00:36:26	-- setup/common.sh@19 -- # local var val
00:05:37.896    00:36:26	-- setup/common.sh@20 -- # local mem_f mem
00:05:37.896    00:36:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.896    00:36:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.896    00:36:26	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.896    00:36:26	-- setup/common.sh@28 -- # mapfile -t mem
00:05:37.896    00:36:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896     00:36:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        76045236 kB' 'MemAvailable:   79568304 kB' 'Buffers:            8064 kB' 'Cached:         11151160 kB' 'SwapCached:            0 kB' 'Active:          7958708 kB' 'Inactive:        3690704 kB' 'Active(anon):    7570784 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493380 kB' 'Mapped:           149204 kB' 'Shmem:           7080596 kB' 'KReclaimable:     195640 kB' 'Slab:             628164 kB' 'SReclaimable:     195640 kB' 'SUnreclaim:       432524 kB' 'KernelStack:       15952 kB' 'PageTables:         7244 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958436 kB' 'Committed_AS:    8758656 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.896    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.896    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.897    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.897    00:36:26	-- setup/common.sh@33 -- # echo 1536
00:05:37.897    00:36:26	-- setup/common.sh@33 -- # return 0
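[editor's note] This second get_meminfo call ran with no node argument, so the node-local file test at common.sh@23 failed and the system-wide /proc/meminfo was parsed instead, yielding HugePages_Total: 1536. Illustrative invocations of both forms, with the values this host reports:

    get_meminfo HugePages_Total      # system-wide -> 1536
    get_meminfo HugePages_Surp 0     # NUMA node 0 only -> 0 (queried next)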
00:05:37.897   00:36:26	-- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:37.897   00:36:26	-- setup/hugepages.sh@112 -- # get_nodes
00:05:37.897   00:36:26	-- setup/hugepages.sh@27 -- # local node
00:05:37.897   00:36:26	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:37.897   00:36:26	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:37.897   00:36:26	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:37.897   00:36:26	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:37.897   00:36:26	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:37.897   00:36:26	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
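[editor's note] get_nodes enumerates the NUMA nodes via the extglob pattern traced at hugepages.sh@29 and records each node's current 2 MB hugepage count in nodes_sys (512 on node 0, 1024 on node 1). The trace does not show where those counts are read from; a sketch assuming the standard per-node sysfs entry for the 2048 kB page size:

    shopt -s extglob
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} strips everything up to the last "node" -> 0, 1, ...
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # at least one NUMA node must exist
    }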
00:05:37.897   00:36:26	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:37.897   00:36:26	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:37.897    00:36:26	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:37.897    00:36:26	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.897    00:36:26	-- setup/common.sh@18 -- # local node=0
00:05:37.897    00:36:26	-- setup/common.sh@19 -- # local var val
00:05:37.897    00:36:26	-- setup/common.sh@20 -- # local mem_f mem
00:05:37.897    00:36:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.897    00:36:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:37.897    00:36:26	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:37.897    00:36:26	-- setup/common.sh@28 -- # mapfile -t mem
00:05:37.897    00:36:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.897    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898     00:36:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        43357116 kB' 'MemUsed:         4707732 kB' 'SwapCached:            0 kB' 'Active:          2728508 kB' 'Inactive:         117812 kB' 'Active(anon):    2463612 kB' 'Inactive(anon):        0 kB' 'Active(file):     264896 kB' 'Inactive(file):   117812 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       2448624 kB' 'Mapped:            97356 kB' 'AnonPages:        400868 kB' 'Shmem:           2065916 kB' 'KernelStack:        9752 kB' 'PageTables:         5208 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     115520 kB' 'Slab:             370100 kB' 'SReclaimable:     115520 kB' 'SUnreclaim:       254580 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.898    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.898    00:36:26	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.898    00:36:26	-- setup/common.sh@33 -- # echo 0
00:05:37.898    00:36:26	-- setup/common.sh@33 -- # return 0
00:05:37.898   00:36:26	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
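[editor's note] Node 0's bookkeeping is now complete: its expected count of 512 (set when the test requested its per-node split earlier in the log) gained the reserved pages (resv=0) at @116 and the node's surplus pages (0, just returned) at @117, so it stays at 512. The traced loop body amounts to:

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # reserved pages
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surplus pages
    done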
00:05:37.898   00:36:26	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:37.898   00:36:26	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:37.898    00:36:26	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:37.898    00:36:26	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.899    00:36:26	-- setup/common.sh@18 -- # local node=1
00:05:37.899    00:36:26	-- setup/common.sh@19 -- # local var val
00:05:37.899    00:36:26	-- setup/common.sh@20 -- # local mem_f mem
00:05:37.899    00:36:26	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.899    00:36:26	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:37.899    00:36:26	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:37.899    00:36:26	-- setup/common.sh@28 -- # mapfile -t mem
00:05:37.899    00:36:26	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899     00:36:26	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220548 kB' 'MemFree:        32688996 kB' 'MemUsed:        11531552 kB' 'SwapCached:            0 kB' 'Active:          5230300 kB' 'Inactive:        3572892 kB' 'Active(anon):    5107272 kB' 'Inactive(anon):        0 kB' 'Active(file):     123028 kB' 'Inactive(file):  3572892 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8710640 kB' 'Mapped:            51848 kB' 'AnonPages:         92556 kB' 'Shmem:           5014720 kB' 'KernelStack:        6200 kB' 'PageTables:         2036 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80120 kB' 'Slab:             258064 kB' 'SReclaimable:      80120 kB' 'SUnreclaim:       177944 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:26	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:26	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.899    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.899    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # continue
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # IFS=': '
00:05:37.900    00:36:27	-- setup/common.sh@31 -- # read -r var val _
00:05:37.900    00:36:27	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.900    00:36:27	-- setup/common.sh@33 -- # echo 0
00:05:37.900    00:36:27	-- setup/common.sh@33 -- # return 0
00:05:37.900   00:36:27	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:37.900   00:36:27	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:37.900   00:36:27	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:37.900   00:36:27	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:37.900   00:36:27	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:37.900  node0=512 expecting 512
00:05:37.900   00:36:27	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:37.900   00:36:27	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:37.900   00:36:27	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:37.900   00:36:27	-- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:37.900  node1=1024 expecting 1024
00:05:37.900   00:36:27	-- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
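[editor's note] The final comparison uses a small bash idiom: each per-node count is stored as an index of a plain array (sorted_t for measured, sorted_s for expected), so duplicate counts collapse and the indices come back in ascending numeric order; joining them with commas makes the two distributions directly comparable. Roughly:

    local -a sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # numeric index => de-duplicated and sorted
        sorted_s[nodes_sys[node]]=1
    done
    local IFS=,
    [[ "${!sorted_t[*]}" == "${!sorted_s[*]}" ]]   # here: 512,1024 on both sides

The backslash-escaped right-hand side at hugepages.sh@130 is simply how xtrace renders a quoted [[ ]] pattern.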
00:05:37.900  
00:05:37.900  real	0m3.523s
00:05:37.900  user	0m1.367s
00:05:37.900  sys	0m2.248s
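[editor's note] The custom_alloc timing breaks down as 1.367 s user + 2.248 s sys, roughly 3.6 s of CPU time against 3.523 s of wall time, so the test is essentially CPU-bound, with most of it spent in the kernel (consistent with repeatedly allocating, faulting, and freeing hugepages).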
00:05:37.900   00:36:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:37.900   00:36:27	-- common/autotest_common.sh@10 -- # set +x
00:05:37.900  ************************************
00:05:37.900  END TEST custom_alloc
00:05:37.900  ************************************
00:05:37.900   00:36:27	-- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:37.900   00:36:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:37.900   00:36:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:37.900   00:36:27	-- common/autotest_common.sh@10 -- # set +x
00:05:37.900  ************************************
00:05:37.900  START TEST no_shrink_alloc
00:05:37.900  ************************************
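[editor's note] no_shrink_alloc starts here under the same run_test wrapper from autotest_common.sh that closed custom_alloc above: it guards against missing arguments (the traced '[' 2 -le 1 ']'), prints the START/END banners, and times the test body. A rough sketch of the behaviour visible in the log (run_test's internal details are assumed):

    run_test() {
        [ $# -le 1 ] && return 1    # corresponds to the traced argument guard
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                   # emits the real/user/sys summary at the end
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }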
00:05:37.900   00:36:27	-- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:05:37.900   00:36:27	-- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:37.900   00:36:27	-- setup/hugepages.sh@49 -- # local size=2097152
00:05:37.900   00:36:27	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:37.900   00:36:27	-- setup/hugepages.sh@51 -- # shift
00:05:37.900   00:36:27	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:37.900   00:36:27	-- setup/hugepages.sh@52 -- # local node_ids
00:05:37.900   00:36:27	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:37.900   00:36:27	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:37.900   00:36:27	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:37.900   00:36:27	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:37.900   00:36:27	-- setup/hugepages.sh@62 -- # local user_nodes
00:05:37.900   00:36:27	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:37.900   00:36:27	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:37.900   00:36:27	-- setup/hugepages.sh@67 -- # nodes_test=()
00:05:37.900   00:36:27	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:37.900   00:36:27	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:37.900   00:36:27	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:37.900   00:36:27	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:37.900   00:36:27	-- setup/hugepages.sh@73 -- # return 0
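[editor's note] get_test_nr_hugepages converts the requested size into a page count and, because an explicit node list ('0') was passed, pins the whole allocation to node 0 instead of splitting it across both nodes. With the traced numbers (size and page size presumably both in kB):

    size=2097152             # requested: 2 GiB, expressed in kB
    default_hugepages=2048   # 2 MB pages, per Hugepagesize in /proc/meminfo
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    nodes_test[0]=$nr_hugepages                    # all 1024 pages on node 0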
00:05:37.900   00:36:27	-- setup/hugepages.sh@198 -- # setup output
00:05:37.900   00:36:27	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:37.900   00:36:27	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:41.203  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:41.203  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:41.203  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
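[editor's note] setup.sh prepares the machine for userspace drivers; the NVMe controller at 0000:5e:00.0 (8086 0a54) and what appear to be DMA-engine channels (8086 2021) are all still bound to vfio-pci from earlier tests, so nothing needs rebinding. The current binding of any device can be confirmed through standard sysfs, independent of setup.sh:

    readlink /sys/bus/pci/devices/0000:5e:00.0/driver   # -> .../drivers/vfio-pci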
00:05:41.203   00:36:30	-- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:41.203   00:36:30	-- setup/hugepages.sh@89 -- # local node
00:05:41.203   00:36:30	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.203   00:36:30	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.203   00:36:30	-- setup/hugepages.sh@92 -- # local surp
00:05:41.203   00:36:30	-- setup/hugepages.sh@93 -- # local resv
00:05:41.203   00:36:30	-- setup/hugepages.sh@94 -- # local anon
00:05:41.203   00:36:30	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.203    00:36:30	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.203    00:36:30	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.203    00:36:30	-- setup/common.sh@18 -- # local node=
00:05:41.203    00:36:30	-- setup/common.sh@19 -- # local var val
00:05:41.203    00:36:30	-- setup/common.sh@20 -- # local mem_f mem
00:05:41.203    00:36:30	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.203    00:36:30	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.203    00:36:30	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.203    00:36:30	-- setup/common.sh@28 -- # mapfile -t mem
00:05:41.203    00:36:30	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.203    00:36:30	-- setup/common.sh@31 -- # IFS=': '
00:05:41.203    00:36:30	-- setup/common.sh@31 -- # read -r var val _
00:05:41.203     00:36:30	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77041532 kB' 'MemAvailable:   80564568 kB' 'Buffers:            8064 kB' 'Cached:         11151252 kB' 'SwapCached:            0 kB' 'Active:          7962232 kB' 'Inactive:        3690704 kB' 'Active(anon):    7574308 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        496852 kB' 'Mapped:           149224 kB' 'Shmem:           7080688 kB' 'KReclaimable:     195576 kB' 'Slab:             627532 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       431956 kB' 'KernelStack:       16320 kB' 'PageTables:         8736 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8763316 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199192 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:41.203-00:05:41.205  [xtrace elided: setup/common.sh@31-32 walks every key of the printf above (MemTotal ... HardwareCorrupted), hitting "continue" on each non-match, until AnonHugePages is reached]
00:05:41.205    00:36:30	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:41.205    00:36:30	-- setup/common.sh@33 -- # echo 0
00:05:41.205    00:36:30	-- setup/common.sh@33 -- # return 0
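The scan traced above is get_meminfo in a nutshell: read /proc/meminfo (or a per-node meminfo file when a node is given), strip any leading "Node N " prefix, split each line on ': ', and echo the value once the requested key matches. A hedged bash reconstruction of that flow, not the setup/common.sh source itself:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern
    # reconstruction of the get_meminfo flow seen in the xtrace; the real
    # helper lives in setup/common.sh and may differ in detail
    get_meminfo() {
        local get=$1 node=${2-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1                          # key not present
    }
    get_meminfo HugePages_Surp            # -> 0 on this box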
00:05:41.205   00:36:30	-- setup/hugepages.sh@97 -- # anon=0
00:05:41.205    00:36:30	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:41.205    00:36:30	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.205    00:36:30	-- setup/common.sh@18 -- # local node=
00:05:41.205    00:36:30	-- setup/common.sh@19 -- # local var val
00:05:41.205    00:36:30	-- setup/common.sh@20 -- # local mem_f mem
00:05:41.205    00:36:30	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.205    00:36:30	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.205    00:36:30	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.205    00:36:30	-- setup/common.sh@28 -- # mapfile -t mem
00:05:41.205    00:36:30	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.205    00:36:30	-- setup/common.sh@31 -- # IFS=': '
00:05:41.205    00:36:30	-- setup/common.sh@31 -- # read -r var val _
00:05:41.205     00:36:30	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77045624 kB' 'MemAvailable:   80568660 kB' 'Buffers:            8064 kB' 'Cached:         11151256 kB' 'SwapCached:            0 kB' 'Active:          7962324 kB' 'Inactive:        3690704 kB' 'Active(anon):    7574400 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        496920 kB' 'Mapped:           149208 kB' 'Shmem:           7080692 kB' 'KReclaimable:     195576 kB' 'Slab:             627812 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       432236 kB' 'KernelStack:       16160 kB' 'PageTables:         8476 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8763328 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199176 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:41.205-00:05:41.206  [xtrace elided: same per-key walk of the printf above, past every key from MemTotal through HugePages_Rsvd, until HugePages_Surp is reached]
00:05:41.206    00:36:30	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.206    00:36:30	-- setup/common.sh@33 -- # echo 0
00:05:41.206    00:36:30	-- setup/common.sh@33 -- # return 0
00:05:41.206   00:36:30	-- setup/hugepages.sh@99 -- # surp=0
00:05:41.206    00:36:30	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:41.206    00:36:30	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:41.206    00:36:30	-- setup/common.sh@18 -- # local node=
00:05:41.206    00:36:30	-- setup/common.sh@19 -- # local var val
00:05:41.206    00:36:30	-- setup/common.sh@20 -- # local mem_f mem
00:05:41.206    00:36:30	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.206    00:36:30	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.206    00:36:30	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.206    00:36:30	-- setup/common.sh@28 -- # mapfile -t mem
00:05:41.206    00:36:30	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.206    00:36:30	-- setup/common.sh@31 -- # IFS=': '
00:05:41.206    00:36:30	-- setup/common.sh@31 -- # read -r var val _
00:05:41.206     00:36:30	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77045520 kB' 'MemAvailable:   80568556 kB' 'Buffers:            8064 kB' 'Cached:         11151268 kB' 'SwapCached:            0 kB' 'Active:          7962632 kB' 'Inactive:        3690704 kB' 'Active(anon):    7574708 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        497160 kB' 'Mapped:           149208 kB' 'Shmem:           7080704 kB' 'KReclaimable:     195576 kB' 'Slab:             627812 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       432236 kB' 'KernelStack:       16144 kB' 'PageTables:         8300 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8759936 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199192 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:41.206-00:05:41.470  [xtrace elided: same per-key walk of the printf above until HugePages_Rsvd is reached]
00:05:41.470    00:36:30	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.470    00:36:30	-- setup/common.sh@33 -- # echo 0
00:05:41.470    00:36:30	-- setup/common.sh@33 -- # return 0
00:05:41.470   00:36:30	-- setup/hugepages.sh@100 -- # resv=0
00:05:41.470   00:36:30	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:41.470  nr_hugepages=1024
00:05:41.470   00:36:30	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:41.470  resv_hugepages=0
00:05:41.470   00:36:30	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:41.470  surplus_hugepages=0
00:05:41.470   00:36:30	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:41.470  anon_hugepages=0
00:05:41.470   00:36:30	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.470   00:36:30	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
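The two arithmetic checks above are the heart of verify_nr_hugepages: with surplus and reserved pages both zero, HugePages_Total must equal the 1024 pages the test requested. Sketched with this run's values, reusing the get_meminfo sketch earlier; a paraphrase of hugepages.sh@107-109:

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)            # 1024 in the scan below
    (( total == nr_hugepages + surp + resv )) || exit 1
    (( total == nr_hugepages )) || exit 1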
00:05:41.470    00:36:30	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:41.470    00:36:30	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:41.470    00:36:30	-- setup/common.sh@18 -- # local node=
00:05:41.470    00:36:30	-- setup/common.sh@19 -- # local var val
00:05:41.470    00:36:30	-- setup/common.sh@20 -- # local mem_f mem
00:05:41.470    00:36:30	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.470    00:36:30	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.470    00:36:30	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.470    00:36:30	-- setup/common.sh@28 -- # mapfile -t mem
00:05:41.470    00:36:30	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.470    00:36:30	-- setup/common.sh@31 -- # IFS=': '
00:05:41.470    00:36:30	-- setup/common.sh@31 -- # read -r var val _
00:05:41.471     00:36:30	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77044884 kB' 'MemAvailable:   80567920 kB' 'Buffers:            8064 kB' 'Cached:         11151280 kB' 'SwapCached:            0 kB' 'Active:          7961392 kB' 'Inactive:        3690704 kB' 'Active(anon):    7573468 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        495972 kB' 'Mapped:           149204 kB' 'Shmem:           7080716 kB' 'KReclaimable:     195576 kB' 'Slab:             627844 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       432268 kB' 'KernelStack:       15952 kB' 'PageTables:         7920 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8759168 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199080 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:41.471    00:36:30	-- setup/common.sh@31-32 -- # [xtrace condensed: every field from MemTotal through Unaccepted (listed in the printf above) is read with IFS=': '; read -r var val _, fails the == HugePages_Total test, and hits 'continue']
00:05:41.472    00:36:30	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.472    00:36:30	-- setup/common.sh@33 -- # echo 1024
00:05:41.472    00:36:30	-- setup/common.sh@33 -- # return 0
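[annotation] In the xtrace above, patterns print with every character escaped, so \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l is simply the literal string HugePages_Total. A minimal bash sketch of what setup/common.sh's get_meminfo appears to do, reconstructed from this trace (the sysfs fallback and the exact return convention are assumptions, not the verbatim SPDK function):

    shopt -s extglob                          # required for the +([0-9]) glob below
    get_meminfo() {                           # usage: get_meminfo <field> [node]
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # per-node statistics live in sysfs when a node index is given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # skip every field except the requested one
            echo "$val"                       # e.g. 1024 for HugePages_Total here
            return 0
        done
        return 1
    }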
00:05:41.472   00:36:30	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.472   00:36:30	-- setup/hugepages.sh@112 -- # get_nodes
00:05:41.472   00:36:30	-- setup/hugepages.sh@27 -- # local node
00:05:41.472   00:36:30	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.472   00:36:30	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:41.472   00:36:30	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.472   00:36:30	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:41.472   00:36:30	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:41.472   00:36:30	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
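[annotation] The get_nodes fragment above walks /sys/devices/system/node/node+([0-9]) and records a per-node hugepage count (node0=1024, node1=0 on this box, so no_nodes=2). A hedged sketch of the equivalent logic; the nr_hugepages sysfs file is a standard kernel interface, but reading it this way is an assumption about the script:

    get_nodes() {                             # assumes extglob is enabled (see sketch above)
        local node
        nodes_sys=()
        for node in /sys/devices/system/node/node+([0-9]); do
            # index by the numeric suffix; value = 2 MiB hugepages allocated on that node
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))                    # fail if no NUMA nodes were found
    }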
00:05:41.472   00:36:30	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:41.472   00:36:30	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:41.472    00:36:30	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:41.472    00:36:30	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.472    00:36:30	-- setup/common.sh@18 -- # local node=0
00:05:41.472    00:36:30	-- setup/common.sh@19 -- # local var val
00:05:41.472    00:36:30	-- setup/common.sh@20 -- # local mem_f mem
00:05:41.472    00:36:30	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.472    00:36:30	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:41.472    00:36:30	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:41.472    00:36:30	-- setup/common.sh@28 -- # mapfile -t mem
00:05:41.472    00:36:30	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.472    00:36:30	-- setup/common.sh@31 -- # IFS=': '
00:05:41.472     00:36:30	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        42326292 kB' 'MemUsed:         5738556 kB' 'SwapCached:            0 kB' 'Active:          2730448 kB' 'Inactive:         117812 kB' 'Active(anon):    2465552 kB' 'Inactive(anon):        0 kB' 'Active(file):     264896 kB' 'Inactive(file):   117812 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       2448660 kB' 'Mapped:            97364 kB' 'AnonPages:        402780 kB' 'Shmem:           2065952 kB' 'KernelStack:        9800 kB' 'PageTables:         5964 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     115456 kB' 'Slab:             369728 kB' 'SReclaimable:     115456 kB' 'SUnreclaim:       254272 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:41.472    00:36:30	-- setup/common.sh@31-32 -- # [xtrace condensed: node0 meminfo fields from MemTotal through HugePages_Free each fail the == HugePages_Surp test and hit 'continue']
00:05:41.473    00:36:30	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.473    00:36:30	-- setup/common.sh@33 -- # echo 0
00:05:41.473    00:36:30	-- setup/common.sh@33 -- # return 0
00:05:41.473   00:36:30	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:41.473   00:36:30	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:41.473   00:36:30	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:41.473   00:36:30	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:41.473   00:36:30	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:41.473  node0=1024 expecting 1024
00:05:41.473   00:36:30	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:41.473   00:36:30	-- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:41.473   00:36:30	-- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:41.473   00:36:30	-- setup/hugepages.sh@202 -- # setup output
00:05:41.473   00:36:30	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.473   00:36:30	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:44.773  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:44.773  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:44.773  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:44.773  INFO: Requested 512 hugepages but 1024 already allocated on node0
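[annotation] Because CLEAR_HUGE=no, setup.sh keeps the existing 1024-page allocation rather than shrinking it to the requested NRHUGE=512. The live per-node count can be checked directly through standard sysfs, independent of SPDK:

    # one "path:count" line per node, e.g. .../node0/...:1024
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages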
00:05:44.773   00:36:33	-- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:44.773   00:36:33	-- setup/hugepages.sh@89 -- # local node
00:05:44.773   00:36:33	-- setup/hugepages.sh@90 -- # local sorted_t
00:05:44.773   00:36:33	-- setup/hugepages.sh@91 -- # local sorted_s
00:05:44.773   00:36:33	-- setup/hugepages.sh@92 -- # local surp
00:05:44.773   00:36:33	-- setup/hugepages.sh@93 -- # local resv
00:05:44.773   00:36:33	-- setup/hugepages.sh@94 -- # local anon
00:05:44.773   00:36:33	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:44.773    00:36:33	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:44.773    00:36:33	-- setup/common.sh@17 -- # local get=AnonHugePages
00:05:44.773    00:36:33	-- setup/common.sh@18 -- # local node=
00:05:44.773    00:36:33	-- setup/common.sh@19 -- # local var val
00:05:44.773    00:36:33	-- setup/common.sh@20 -- # local mem_f mem
00:05:44.773    00:36:33	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.773    00:36:33	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.773    00:36:33	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.773    00:36:33	-- setup/common.sh@28 -- # mapfile -t mem
00:05:44.773    00:36:33	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.773    00:36:33	-- setup/common.sh@31 -- # IFS=': '
00:05:44.773    00:36:33	-- setup/common.sh@31 -- # read -r var val _
00:05:44.773     00:36:33	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77054424 kB' 'MemAvailable:   80577460 kB' 'Buffers:            8064 kB' 'Cached:         11151352 kB' 'SwapCached:            0 kB' 'Active:          7961892 kB' 'Inactive:        3690704 kB' 'Active(anon):    7573968 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        496416 kB' 'Mapped:           149760 kB' 'Shmem:           7080788 kB' 'KReclaimable:     195576 kB' 'Slab:             627552 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       431976 kB' 'KernelStack:       16048 kB' 'PageTables:         7544 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8762300 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199032 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:44.773    00:36:33	-- setup/common.sh@31-32 -- # [xtrace condensed: fields from MemTotal through HardwareCorrupted each fail the == AnonHugePages test and hit 'continue']
00:05:44.774    00:36:33	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:44.774    00:36:33	-- setup/common.sh@33 -- # echo 0
00:05:44.774    00:36:33	-- setup/common.sh@33 -- # return 0
00:05:44.774   00:36:33	-- setup/hugepages.sh@97 -- # anon=0
00:05:44.774    00:36:33	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:44.774    00:36:33	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:44.774    00:36:33	-- setup/common.sh@18 -- # local node=
00:05:44.774    00:36:33	-- setup/common.sh@19 -- # local var val
00:05:44.774    00:36:33	-- setup/common.sh@20 -- # local mem_f mem
00:05:44.774    00:36:33	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.774    00:36:33	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.774    00:36:33	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.774    00:36:33	-- setup/common.sh@28 -- # mapfile -t mem
00:05:44.774    00:36:33	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.774    00:36:33	-- setup/common.sh@31 -- # IFS=': '
00:05:44.774    00:36:33	-- setup/common.sh@31 -- # read -r var val _
00:05:44.774     00:36:33	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77049388 kB' 'MemAvailable:   80572424 kB' 'Buffers:            8064 kB' 'Cached:         11151356 kB' 'SwapCached:            0 kB' 'Active:          7965396 kB' 'Inactive:        3690704 kB' 'Active(anon):    7577472 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        500024 kB' 'Mapped:           150132 kB' 'Shmem:           7080792 kB' 'KReclaimable:     195576 kB' 'Slab:             627552 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       431976 kB' 'KernelStack:       16080 kB' 'PageTables:         7656 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8765624 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199032 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:44.774    00:36:33	-- setup/common.sh@31-32 -- # [xtrace condensed: fields from MemTotal through HugePages_Rsvd each fail the == HugePages_Surp test and hit 'continue']
00:05:44.775    00:36:33	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.775    00:36:33	-- setup/common.sh@33 -- # echo 0
00:05:44.776    00:36:33	-- setup/common.sh@33 -- # return 0
00:05:44.776   00:36:33	-- setup/hugepages.sh@99 -- # surp=0
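
For reference, the scan above is the xtrace of setup/common.sh's get_meminfo: it maps /proc/meminfo (or a per-node sysfs copy) into an array, strips any "Node <n> " prefix, and walks the key/value pairs until the requested key matches, which is why the trace emits one [[ ... ]] / continue pair per meminfo field. A minimal standalone sketch of that pattern, an approximation of the helper rather than a verbatim copy:

    #!/usr/bin/env bash
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 mem line var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy instead of the global file.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <n> " prefix; drop it so both file
        # formats parse identically (a no-op for /proc/meminfo itself).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run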
00:05:44.776    00:36:33	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:44.776    00:36:33	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:44.776    00:36:33	-- setup/common.sh@18 -- # local node=
00:05:44.776    00:36:33	-- setup/common.sh@19 -- # local var val
00:05:44.776    00:36:33	-- setup/common.sh@20 -- # local mem_f mem
00:05:44.776    00:36:33	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.776    00:36:33	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.776    00:36:33	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.776    00:36:33	-- setup/common.sh@28 -- # mapfile -t mem
00:05:44.776    00:36:33	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.776    00:36:33	-- setup/common.sh@31 -- # IFS=': '
00:05:44.776    00:36:33	-- setup/common.sh@31 -- # read -r var val _
00:05:44.776     00:36:33	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77056020 kB' 'MemAvailable:   80579056 kB' 'Buffers:            8064 kB' 'Cached:         11151368 kB' 'SwapCached:            0 kB' 'Active:          7959628 kB' 'Inactive:        3690704 kB' 'Active(anon):    7571704 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        494132 kB' 'Mapped:           149216 kB' 'Shmem:           7080804 kB' 'KReclaimable:     195576 kB' 'Slab:             627596 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       432020 kB' 'KernelStack:       15968 kB' 'PageTables:         7284 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8759520 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199032 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:44.776    00:36:33	-- setup/common.sh@32 -- # ... scan walks every key of the meminfo dump above (MemTotal through HugePages_Free), none matching HugePages_Rsvd, until:
00:05:44.777    00:36:33	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:44.777    00:36:33	-- setup/common.sh@33 -- # echo 0
00:05:44.777    00:36:33	-- setup/common.sh@33 -- # return 0
00:05:44.777   00:36:33	-- setup/hugepages.sh@100 -- # resv=0
00:05:44.777   00:36:33	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:44.777  nr_hugepages=1024
00:05:44.777   00:36:33	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:44.777  resv_hugepages=0
00:05:44.777   00:36:33	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:44.777  surplus_hugepages=0
00:05:44.777   00:36:33	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:44.777  anon_hugepages=0
00:05:44.777   00:36:33	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:44.777   00:36:33	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
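
Restated, the two arithmetic guards above check that the kernel's hugepage accounting is self-consistent before the test goes on. With the values from this run (a sketch of the invariant, not the script's exact wording):

    nr_hugepages=1024 surp=0 resv=0
    # The kernel-reported total must equal requested + surplus + reserved,
    # and with surp=resv=0 it must equal the requested count exactly.
    (( 1024 == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( 1024 == nr_hugepages ))               || echo 'unexpected surplus/reserved pages' >&2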
00:05:44.777    00:36:33	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:44.777    00:36:33	-- setup/common.sh@17 -- # local get=HugePages_Total
00:05:44.777    00:36:33	-- setup/common.sh@18 -- # local node=
00:05:44.777    00:36:33	-- setup/common.sh@19 -- # local var val
00:05:44.777    00:36:33	-- setup/common.sh@20 -- # local mem_f mem
00:05:44.777    00:36:33	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.777    00:36:33	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.777    00:36:33	-- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.777    00:36:33	-- setup/common.sh@28 -- # mapfile -t mem
00:05:44.777    00:36:33	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.777    00:36:33	-- setup/common.sh@31 -- # IFS=': '
00:05:44.777    00:36:33	-- setup/common.sh@31 -- # read -r var val _
00:05:44.777     00:36:33	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285396 kB' 'MemFree:        77057424 kB' 'MemAvailable:   80580460 kB' 'Buffers:            8064 kB' 'Cached:         11151392 kB' 'SwapCached:            0 kB' 'Active:          7959268 kB' 'Inactive:        3690704 kB' 'Active(anon):    7571344 kB' 'Inactive(anon):        0 kB' 'Active(file):     387924 kB' 'Inactive(file):  3690704 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        493744 kB' 'Mapped:           149216 kB' 'Shmem:           7080828 kB' 'KReclaimable:     195576 kB' 'Slab:             627596 kB' 'SReclaimable:     195576 kB' 'SUnreclaim:       432020 kB' 'KernelStack:       15952 kB' 'PageTables:         7236 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482724 kB' 'Committed_AS:    8759532 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199032 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      441768 kB' 'DirectMap2M:     8671232 kB' 'DirectMap1G:    93323264 kB'
00:05:44.777    00:36:33	-- setup/common.sh@32 -- # ... scan walks every key of the meminfo dump above (MemTotal through Unaccepted), none matching HugePages_Total, until:
00:05:44.778    00:36:33	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:44.778    00:36:33	-- setup/common.sh@33 -- # echo 1024
00:05:44.778    00:36:33	-- setup/common.sh@33 -- # return 0
00:05:44.778   00:36:33	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:44.778   00:36:33	-- setup/hugepages.sh@112 -- # get_nodes
00:05:44.779   00:36:33	-- setup/hugepages.sh@27 -- # local node
00:05:44.779   00:36:33	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:44.779   00:36:33	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:44.779   00:36:33	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:44.779   00:36:33	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:44.779   00:36:33	-- setup/hugepages.sh@32 -- # no_nodes=2
00:05:44.779   00:36:33	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
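
The get_nodes step above enumerates the NUMA nodes through sysfs and records how many hugepages each currently holds (node0=1024, node1=0 on this machine). A sketch of that enumeration; the hugepages-2048kB directory matches the Hugepagesize reported earlier, and the loop body is inferred from the two assignments in the trace:

    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Record this node's current 2 MiB hugepage allocation.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}              # 2 here
    (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2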
00:05:44.779   00:36:33	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:44.779   00:36:33	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:44.779    00:36:33	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:44.779    00:36:33	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:44.779    00:36:33	-- setup/common.sh@18 -- # local node=0
00:05:44.779    00:36:33	-- setup/common.sh@19 -- # local var val
00:05:44.779    00:36:33	-- setup/common.sh@20 -- # local mem_f mem
00:05:44.779    00:36:33	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.779    00:36:33	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:44.779    00:36:33	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:44.779    00:36:33	-- setup/common.sh@28 -- # mapfile -t mem
00:05:44.779    00:36:33	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.779    00:36:33	-- setup/common.sh@31 -- # IFS=': '
00:05:44.779    00:36:33	-- setup/common.sh@31 -- # read -r var val _
00:05:44.779     00:36:33	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        42327404 kB' 'MemUsed:         5737444 kB' 'SwapCached:            0 kB' 'Active:          2728860 kB' 'Inactive:         117812 kB' 'Active(anon):    2463964 kB' 'Inactive(anon):        0 kB' 'Active(file):     264896 kB' 'Inactive(file):   117812 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       2448692 kB' 'Mapped:            97356 kB' 'AnonPages:        401132 kB' 'Shmem:           2065984 kB' 'KernelStack:        9736 kB' 'PageTables:         5208 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:     115456 kB' 'Slab:             369692 kB' 'SReclaimable:     115456 kB' 'SUnreclaim:       254236 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:05:44.779    00:36:33	-- setup/common.sh@32 -- # ... scan walks every key of the node0 meminfo dump above (MemTotal through HugePages_Free), none matching HugePages_Surp, until:
00:05:44.780    00:36:33	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.780    00:36:33	-- setup/common.sh@33 -- # echo 0
00:05:44.780    00:36:33	-- setup/common.sh@33 -- # return 0
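
Note that the per-node file parsed in this pass has a different schema from the global /proc/meminfo: every line carries a "Node 0" prefix (stripped by the mem=("${mem[@]#Node +([0-9]) }") step) and it reports MemUsed and FilePages where the global file has MemAvailable and Cached. The first lines as dumped by the printf above:

    Node 0 MemTotal:       48064848 kB
    Node 0 MemFree:        42327404 kB
    Node 0 MemUsed:         5737444 kB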
00:05:44.780   00:36:33	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:44.780   00:36:33	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:44.780   00:36:33	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:44.780   00:36:33	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:44.780   00:36:33	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:44.780  node0=1024 expecting 1024
00:05:44.780   00:36:33	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
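
The backslash run in that final check is an xtrace artifact, not script text: when the right-hand side of [[ == ]] is quoted, bash matches it literally instead of as a glob, and set -x reprints it with every character escaped to make the literal match visible. A two-line demonstration (variable name hypothetical):

    expected=1024; set -x
    [[ 1024 == "$expected" ]] && echo 'node0=1024 expecting 1024'
    # xtrace prints: [[ 1024 == \1\0\2\4 ]]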
00:05:44.780  
00:05:44.780  real	0m6.915s
00:05:44.780  user	0m2.663s
00:05:44.780  sys	0m4.446s
00:05:44.780   00:36:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:44.780   00:36:33	-- common/autotest_common.sh@10 -- # set +x
00:05:44.780  ************************************
00:05:44.780  END TEST no_shrink_alloc
00:05:44.780  ************************************
00:05:44.780   00:36:34	-- setup/hugepages.sh@217 -- # clear_hp
00:05:44.780   00:36:34	-- setup/hugepages.sh@37 -- # local node hp
00:05:44.780   00:36:34	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:44.780   00:36:34	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:44.780   00:36:34	-- setup/hugepages.sh@41 -- # echo 0
00:05:44.780   00:36:34	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:44.780   00:36:34	-- setup/hugepages.sh@41 -- # echo 0
00:05:44.780   00:36:34	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:44.780   00:36:34	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:44.780   00:36:34	-- setup/hugepages.sh@41 -- # echo 0
00:05:44.780   00:36:34	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:44.780   00:36:34	-- setup/hugepages.sh@41 -- # echo 0
00:05:44.780   00:36:34	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:44.780   00:36:34	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
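
clear_hp above walks every node's hugepage directories and zeroes the pools so the next suite starts clean: two nodes times two page sizes on this machine, hence the four "echo 0" lines. A sketch of that teardown; the redirection target is inferred, since the trace only shows the echoes:

    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # release this node's pool
        done
    done
    export CLEAR_HUGE=yes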
00:05:44.780  
00:05:44.780  real	0m27.971s
00:05:44.780  user	0m9.547s
00:05:44.780  sys	0m16.017s
00:05:45.040   00:36:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:45.040   00:36:34	-- common/autotest_common.sh@10 -- # set +x
00:05:45.040  ************************************
00:05:45.040  END TEST hugepages
00:05:45.040  ************************************
00:05:45.040   00:36:34	-- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/driver.sh
00:05:45.040   00:36:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:45.040   00:36:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:45.040   00:36:34	-- common/autotest_common.sh@10 -- # set +x
00:05:45.040  ************************************
00:05:45.040  START TEST driver
00:05:45.040  ************************************
00:05:45.040   00:36:34	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/driver.sh
00:05:45.040  * Looking for test storage...
00:05:45.040  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:05:45.040     00:36:34	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:45.040      00:36:34	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:45.040      00:36:34	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:45.040     00:36:34	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:45.040     00:36:34	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:45.040     00:36:34	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:45.040     00:36:34	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:45.040     00:36:34	-- scripts/common.sh@335 -- # IFS=.-:
00:05:45.040     00:36:34	-- scripts/common.sh@335 -- # read -ra ver1
00:05:45.040     00:36:34	-- scripts/common.sh@336 -- # IFS=.-:
00:05:45.040     00:36:34	-- scripts/common.sh@336 -- # read -ra ver2
00:05:45.040     00:36:34	-- scripts/common.sh@337 -- # local 'op=<'
00:05:45.040     00:36:34	-- scripts/common.sh@339 -- # ver1_l=2
00:05:45.040     00:36:34	-- scripts/common.sh@340 -- # ver2_l=1
00:05:45.040     00:36:34	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:45.040     00:36:34	-- scripts/common.sh@343 -- # case "$op" in
00:05:45.040     00:36:34	-- scripts/common.sh@344 -- # : 1
00:05:45.040     00:36:34	-- scripts/common.sh@363 -- # (( v = 0 ))
00:05:45.040     00:36:34	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:45.040      00:36:34	-- scripts/common.sh@364 -- # decimal 1
00:05:45.040      00:36:34	-- scripts/common.sh@352 -- # local d=1
00:05:45.040      00:36:34	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:45.040      00:36:34	-- scripts/common.sh@354 -- # echo 1
00:05:45.040     00:36:34	-- scripts/common.sh@364 -- # ver1[v]=1
00:05:45.040      00:36:34	-- scripts/common.sh@365 -- # decimal 2
00:05:45.040      00:36:34	-- scripts/common.sh@352 -- # local d=2
00:05:45.040      00:36:34	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:45.040      00:36:34	-- scripts/common.sh@354 -- # echo 2
00:05:45.040     00:36:34	-- scripts/common.sh@365 -- # ver2[v]=2
00:05:45.040     00:36:34	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:45.040     00:36:34	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:45.040     00:36:34	-- scripts/common.sh@367 -- # return 0
00:05:45.040     00:36:34	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:45.040     00:36:34	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:45.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.040  		--rc genhtml_branch_coverage=1
00:05:45.040  		--rc genhtml_function_coverage=1
00:05:45.040  		--rc genhtml_legend=1
00:05:45.040  		--rc geninfo_all_blocks=1
00:05:45.040  		--rc geninfo_unexecuted_blocks=1
00:05:45.040  		
00:05:45.040  		'
00:05:45.040     00:36:34	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:45.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.040  		--rc genhtml_branch_coverage=1
00:05:45.040  		--rc genhtml_function_coverage=1
00:05:45.040  		--rc genhtml_legend=1
00:05:45.040  		--rc geninfo_all_blocks=1
00:05:45.040  		--rc geninfo_unexecuted_blocks=1
00:05:45.040  		
00:05:45.040  		'
00:05:45.040     00:36:34	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:45.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.040  		--rc genhtml_branch_coverage=1
00:05:45.040  		--rc genhtml_function_coverage=1
00:05:45.040  		--rc genhtml_legend=1
00:05:45.040  		--rc geninfo_all_blocks=1
00:05:45.040  		--rc geninfo_unexecuted_blocks=1
00:05:45.040  		
00:05:45.040  		'
00:05:45.040     00:36:34	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:05:45.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.040  		--rc genhtml_branch_coverage=1
00:05:45.040  		--rc genhtml_function_coverage=1
00:05:45.040  		--rc genhtml_legend=1
00:05:45.040  		--rc geninfo_all_blocks=1
00:05:45.040  		--rc geninfo_unexecuted_blocks=1
00:05:45.040  		
00:05:45.040  		'
00:05:45.040   00:36:34	-- setup/driver.sh@68 -- # setup reset
00:05:45.040   00:36:34	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:45.040   00:36:34	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:05:50.323   00:36:38	-- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:50.323   00:36:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:50.323   00:36:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:50.323   00:36:38	-- common/autotest_common.sh@10 -- # set +x
00:05:50.323  ************************************
00:05:50.323  START TEST guess_driver
00:05:50.323  ************************************
00:05:50.323   00:36:38	-- common/autotest_common.sh@1114 -- # guess_driver
00:05:50.323   00:36:38	-- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:50.323   00:36:38	-- setup/driver.sh@47 -- # local fail=0
00:05:50.323    00:36:38	-- setup/driver.sh@49 -- # pick_driver
00:05:50.323    00:36:38	-- setup/driver.sh@36 -- # vfio
00:05:50.323    00:36:38	-- setup/driver.sh@21 -- # local iommu_groups
00:05:50.323    00:36:38	-- setup/driver.sh@22 -- # local unsafe_vfio
00:05:50.323    00:36:38	-- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:50.323    00:36:38	-- setup/driver.sh@25 -- # unsafe_vfio=N
00:05:50.323    00:36:38	-- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:50.323    00:36:38	-- setup/driver.sh@29 -- # (( 162 > 0 ))
00:05:50.323    00:36:38	-- setup/driver.sh@30 -- # is_driver vfio_pci
00:05:50.323    00:36:38	-- setup/driver.sh@14 -- # mod vfio_pci
00:05:50.323     00:36:38	-- setup/driver.sh@12 -- # dep vfio_pci
00:05:50.323     00:36:38	-- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:05:50.323    00:36:38	-- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 
00:05:50.323  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:05:50.323  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:05:50.323  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:05:50.323  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:05:50.323  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 
00:05:50.323  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 
00:05:50.323  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz  == *\.\k\o* ]]
00:05:50.323    00:36:38	-- setup/driver.sh@30 -- # return 0
00:05:50.323    00:36:38	-- setup/driver.sh@37 -- # echo vfio-pci
00:05:50.323   00:36:38	-- setup/driver.sh@49 -- # driver=vfio-pci
00:05:50.323   00:36:38	-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:50.323   00:36:38	-- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:50.323  Looking for driver=vfio-pci
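
pick_driver lands on vfio-pci above because the IOMMU is active (162 groups under /sys/kernel/iommu_groups) and modprobe can resolve vfio_pci into a chain of real .ko modules, which is exactly what the insmod listing shows. A reconstructed sketch of that decision; the fallback string matches the 'No valid driver found' comparison in the trace:

    pick_driver() {
        # An unmatched glob would still count as 1 entry; harmless here,
        # where the run found 162 groups.
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) && \
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }
    driver=$(pick_driver)
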
00:05:50.323   00:36:38	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:50.323    00:36:38	-- setup/driver.sh@45 -- # setup output config
00:05:50.323    00:36:38	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:50.323    00:36:38	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:52.890   00:36:41	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:52.890   00:36:41	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:52.890   00:36:41	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:56.183   00:36:45	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:56.183   00:36:45	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:56.183   00:36:45	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:56.183   00:36:45	-- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:56.183   00:36:45	-- setup/driver.sh@65 -- # setup reset
00:05:56.183   00:36:45	-- setup/common.sh@9 -- # [[ reset == output ]]
00:05:56.183   00:36:45	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:06:01.518  
00:06:01.518  real	0m11.052s
00:06:01.518  user	0m2.407s
00:06:01.518  sys	0m4.779s
00:06:01.518   00:36:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:01.518   00:36:49	-- common/autotest_common.sh@10 -- # set +x
00:06:01.518  ************************************
00:06:01.518  END TEST guess_driver
00:06:01.518  ************************************
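
The loop that just finished re-reads 'setup output config' line by line, keeping the fifth field (the '->' marker) and the driver name after it; any device bound to something other than vfio-pci would have set fail=1 and tripped the (( fail == 0 )) check. A condensed sketch of that verification, with the field layout inferred from the read -r _ _ _ _ marker setup_driver trace:

    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue           # only binding lines count
        [[ $setup_driver == vfio-pci ]] || fail=1   # expected driver, per the trace
    done < <(setup output config)                   # 'setup' wraps scripts/setup.sh
    (( fail == 0 ))
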
00:06:01.518  
00:06:01.518  real	0m15.737s
00:06:01.518  user	0m3.699s
00:06:01.518  sys	0m7.352s
00:06:01.518   00:36:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:01.518   00:36:49	-- common/autotest_common.sh@10 -- # set +x
00:06:01.518  ************************************
00:06:01.518  END TEST driver
00:06:01.518  ************************************
00:06:01.518   00:36:49	-- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/devices.sh
00:06:01.518   00:36:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:01.518   00:36:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:01.518   00:36:49	-- common/autotest_common.sh@10 -- # set +x
00:06:01.518  ************************************
00:06:01.518  START TEST devices
00:06:01.518  ************************************
00:06:01.518   00:36:49	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/devices.sh
00:06:01.518  * Looking for test storage...
00:06:01.518  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:06:01.518     00:36:49	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:01.518      00:36:49	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:01.518      00:36:49	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:01.518     00:36:50	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:01.518     00:36:50	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:01.518     00:36:50	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:01.518     00:36:50	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:01.518     00:36:50	-- scripts/common.sh@335 -- # IFS=.-:
00:06:01.518     00:36:50	-- scripts/common.sh@335 -- # read -ra ver1
00:06:01.518     00:36:50	-- scripts/common.sh@336 -- # IFS=.-:
00:06:01.518     00:36:50	-- scripts/common.sh@336 -- # read -ra ver2
00:06:01.518     00:36:50	-- scripts/common.sh@337 -- # local 'op=<'
00:06:01.519     00:36:50	-- scripts/common.sh@339 -- # ver1_l=2
00:06:01.519     00:36:50	-- scripts/common.sh@340 -- # ver2_l=1
00:06:01.519     00:36:50	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:01.519     00:36:50	-- scripts/common.sh@343 -- # case "$op" in
00:06:01.519     00:36:50	-- scripts/common.sh@344 -- # : 1
00:06:01.519     00:36:50	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:01.519     00:36:50	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:01.519      00:36:50	-- scripts/common.sh@364 -- # decimal 1
00:06:01.519      00:36:50	-- scripts/common.sh@352 -- # local d=1
00:06:01.519      00:36:50	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:01.519      00:36:50	-- scripts/common.sh@354 -- # echo 1
00:06:01.519     00:36:50	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:01.519      00:36:50	-- scripts/common.sh@365 -- # decimal 2
00:06:01.519      00:36:50	-- scripts/common.sh@352 -- # local d=2
00:06:01.519      00:36:50	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:01.519      00:36:50	-- scripts/common.sh@354 -- # echo 2
00:06:01.519     00:36:50	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:01.519     00:36:50	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:01.519     00:36:50	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:01.519     00:36:50	-- scripts/common.sh@367 -- # return 0
00:06:01.519     00:36:50	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:01.519     00:36:50	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:01.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.519  		--rc genhtml_branch_coverage=1
00:06:01.519  		--rc genhtml_function_coverage=1
00:06:01.519  		--rc genhtml_legend=1
00:06:01.519  		--rc geninfo_all_blocks=1
00:06:01.519  		--rc geninfo_unexecuted_blocks=1
00:06:01.519  		
00:06:01.519  		'
00:06:01.519     00:36:50	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:01.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.519  		--rc genhtml_branch_coverage=1
00:06:01.519  		--rc genhtml_function_coverage=1
00:06:01.519  		--rc genhtml_legend=1
00:06:01.519  		--rc geninfo_all_blocks=1
00:06:01.519  		--rc geninfo_unexecuted_blocks=1
00:06:01.519  		
00:06:01.519  		'
00:06:01.519     00:36:50	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:01.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.519  		--rc genhtml_branch_coverage=1
00:06:01.519  		--rc genhtml_function_coverage=1
00:06:01.519  		--rc genhtml_legend=1
00:06:01.519  		--rc geninfo_all_blocks=1
00:06:01.519  		--rc geninfo_unexecuted_blocks=1
00:06:01.519  		
00:06:01.519  		'
00:06:01.519     00:36:50	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:01.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.519  		--rc genhtml_branch_coverage=1
00:06:01.519  		--rc genhtml_function_coverage=1
00:06:01.519  		--rc genhtml_legend=1
00:06:01.519  		--rc geninfo_all_blocks=1
00:06:01.519  		--rc geninfo_unexecuted_blocks=1
00:06:01.519  		
00:06:01.519  		'
00:06:01.519   00:36:50	-- setup/devices.sh@190 -- # trap cleanup EXIT
00:06:01.519   00:36:50	-- setup/devices.sh@192 -- # setup reset
00:06:01.519   00:36:50	-- setup/common.sh@9 -- # [[ reset == output ]]
00:06:01.519   00:36:50	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:06:04.813   00:36:53	-- setup/devices.sh@194 -- # get_zoned_devs
00:06:04.813   00:36:53	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:06:04.813   00:36:53	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:06:04.813   00:36:53	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:06:04.813   00:36:53	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:06:04.813   00:36:53	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:06:04.813   00:36:53	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:06:04.813   00:36:53	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:04.813   00:36:53	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
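
get_zoned_devs walks /sys/block/nvme* and records any namespace whose queue/zoned attribute is something other than 'none', so zoned drives are kept out of the mount tests; here nvme0n1 reports 'none' and the map stays empty. Reconstructed sketch, with the sysfs layout taken from the trace:

    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        zoned=$(cat "$nvme/queue/zoned" 2>/dev/null) || zoned=none
        if [[ $zoned != none ]]; then
            zoned_devs[${nvme##*/}]=$zoned   # e.g. zoned_devs[nvme2n1]=host-managed
        fi
    done
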
00:06:04.813   00:36:53	-- setup/devices.sh@196 -- # blocks=()
00:06:04.813   00:36:53	-- setup/devices.sh@196 -- # declare -a blocks
00:06:04.813   00:36:53	-- setup/devices.sh@197 -- # blocks_to_pci=()
00:06:04.813   00:36:53	-- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:06:04.813   00:36:53	-- setup/devices.sh@198 -- # min_disk_size=3221225472
00:06:04.813   00:36:53	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:06:04.813   00:36:53	-- setup/devices.sh@201 -- # ctrl=nvme0n1
00:06:04.813   00:36:53	-- setup/devices.sh@201 -- # ctrl=nvme0
00:06:04.813   00:36:53	-- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:06:04.813   00:36:53	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:06:04.813   00:36:53	-- setup/devices.sh@204 -- # block_in_use nvme0n1
00:06:04.813   00:36:53	-- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:06:04.813   00:36:53	-- scripts/common.sh@389 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:06:04.813  No valid GPT data, bailing
00:06:04.813    00:36:53	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:04.813   00:36:53	-- scripts/common.sh@393 -- # pt=
00:06:04.813   00:36:53	-- scripts/common.sh@394 -- # return 1
00:06:04.813    00:36:53	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:06:04.813    00:36:53	-- setup/common.sh@76 -- # local dev=nvme0n1
00:06:04.813    00:36:53	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:06:04.813    00:36:53	-- setup/common.sh@80 -- # echo 4000787030016
00:06:04.813   00:36:53	-- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size ))
00:06:04.813   00:36:53	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:06:04.813   00:36:53	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:06:04.813   00:36:53	-- setup/devices.sh@209 -- # (( 1 > 0 ))
00:06:04.813   00:36:53	-- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
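
A namespace becomes the test disk only when it is free and big enough: spdk-gpt.py and blkid find no partition table on it ('No valid GPT data, bailing', empty PTTYPE), and its capacity is at least min_disk_size, 3221225472 bytes (3 GiB); the 4000787030016 echoed above is this 4 TB drive's size, sectors times 512. A sketch of that filter, assuming blkid and the usual sysfs size attribute; sec_size_to_bytes mirrors the helper named in the trace:

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
    block_in_use() {
        # In use if blkid reports any partition-table type on the device.
        [[ -n $(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) ]]
    }
    sec_size_to_bytes() {
        # /sys/block/<dev>/size counts 512-byte sectors.
        echo $(( $(cat "/sys/block/$1/size") * 512 ))
    }
    if ! block_in_use nvme0n1 && (( $(sec_size_to_bytes nvme0n1) >= min_disk_size )); then
        test_disk=nvme0n1
    fi
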
00:06:04.813   00:36:53	-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:06:04.813   00:36:53	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:04.813   00:36:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:04.813   00:36:53	-- common/autotest_common.sh@10 -- # set +x
00:06:04.813  ************************************
00:06:04.813  START TEST nvme_mount
00:06:04.813  ************************************
00:06:04.813   00:36:53	-- common/autotest_common.sh@1114 -- # nvme_mount
00:06:04.813   00:36:53	-- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:06:04.813   00:36:53	-- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:06:04.813   00:36:53	-- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:04.813   00:36:53	-- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:04.813   00:36:53	-- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:06:04.813   00:36:53	-- setup/common.sh@39 -- # local disk=nvme0n1
00:06:04.813   00:36:53	-- setup/common.sh@40 -- # local part_no=1
00:06:04.813   00:36:53	-- setup/common.sh@41 -- # local size=1073741824
00:06:04.813   00:36:53	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:06:04.813   00:36:53	-- setup/common.sh@44 -- # parts=()
00:06:04.813   00:36:53	-- setup/common.sh@44 -- # local parts
00:06:04.813   00:36:53	-- setup/common.sh@46 -- # (( part = 1 ))
00:06:04.813   00:36:53	-- setup/common.sh@46 -- # (( part <= part_no ))
00:06:04.813   00:36:53	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:04.813   00:36:53	-- setup/common.sh@46 -- # (( part++ ))
00:06:04.813   00:36:53	-- setup/common.sh@46 -- # (( part <= part_no ))
00:06:04.813   00:36:53	-- setup/common.sh@51 -- # (( size /= 512 ))
00:06:04.813   00:36:53	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:06:04.813   00:36:53	-- setup/common.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:06:05.752  Creating new GPT entries in memory.
00:06:05.752  GPT data structures destroyed! You may now partition the disk using fdisk or
00:06:05.752  other utilities.
00:06:05.752   00:36:54	-- setup/common.sh@57 -- # (( part = 1 ))
00:06:05.752   00:36:54	-- setup/common.sh@57 -- # (( part <= part_no ))
00:06:05.752   00:36:54	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:05.752   00:36:54	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:05.752   00:36:54	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:06:06.696  Creating new GPT entries in memory.
00:06:06.696  The operation has completed successfully.
00:06:06.696   00:36:55	-- setup/common.sh@57 -- # (( part++ ))
00:06:06.696   00:36:55	-- setup/common.sh@57 -- # (( part <= part_no ))
00:06:06.696   00:36:55	-- setup/common.sh@62 -- # wait 925300
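
partition_drive wipes the GPT with sgdisk --zap-all and then, under flock to serialize access to the device, creates one 1 GiB partition spanning sectors 2048 through 2099199 (1073741824 / 512 = 2097152 sectors); the wait on sync_dev_uevents.sh holds the test until udev has created the nvme0n1p1 node. The sgdisk sequence, reconstructed as a standalone (and destructive) sketch:

    disk=/dev/nvme0n1            # scratch device: everything on it is destroyed
    size=$((1073741824 / 512))   # 1 GiB partition in 512-byte sectors (2097152)
    sgdisk "$disk" --zap-all     # wipe GPT, backup GPT and protective MBR
    part_start=2048
    part_end=$(( part_start + size - 1 ))   # 2099199, matching the trace
    flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
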
00:06:06.696   00:36:55	-- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:06.696   00:36:55	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount size=
00:06:06.696   00:36:55	-- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:06.696   00:36:55	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:06:06.696   00:36:55	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:06:06.955   00:36:55	-- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
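
mkfs then formats the fresh partition as ext4 and mounts it at the test directory. The equivalent commands, with the mount point shortened for illustration (the job uses the full .../spdk/test/setup/nvme_mount path):

    dev=/dev/nvme0n1p1
    mnt=/tmp/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"   # -q quiet, -F force (target is a whole device)
    mount "$dev" "$mnt"
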
00:06:06.955   00:36:56	-- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:06.955   00:36:56	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:06:06.955   00:36:56	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:06:06.955   00:36:56	-- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:06.955   00:36:56	-- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:06.955   00:36:56	-- setup/devices.sh@53 -- # local found=0
00:06:06.955   00:36:56	-- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:06.955   00:36:56	-- setup/devices.sh@56 -- # :
00:06:06.955   00:36:56	-- setup/devices.sh@59 -- # local pci status
00:06:06.955   00:36:56	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:06.955    00:36:56	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:06:06.955    00:36:56	-- setup/devices.sh@47 -- # setup output config
00:06:06.955    00:36:56	-- setup/common.sh@9 -- # [[ output == output ]]
00:06:06.955    00:36:56	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:06:10.247   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.247   00:36:59	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:06:10.247   00:36:59	-- setup/devices.sh@63 -- # found=1
00:06:10.247   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.247   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.247   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.247   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.247   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.247   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.247   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.247   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.248   00:36:59	-- setup/devices.sh@66 -- # (( found == 1 ))
00:06:10.248   00:36:59	-- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount ]]
00:06:10.248   00:36:59	-- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:10.248   00:36:59	-- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:10.248   00:36:59	-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
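
The verify helper exercised above (and again after each re-mount below) takes the device's PCI address, the expected mounts string, the mount point, and the dummy test file; it scans 'setup output config' for an 'Active devices' status on that address, then checks that the mount point is really mounted and that the dummy file landed on it. A condensed reconstruction with the argument layout taken from the locals in the trace:

    verify() {   # verify <pci-addr> <mounts> <mount-point> <test-file>
        local dev=$1 mounts=$2 mount_point=$3 test_file=$4
        local pci status found=0
        while read -r pci _ _ status; do
            [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]] && found=1
        done < <(PCI_ALLOWED=$dev setup output config)
        (( found == 1 )) || return 1       # device must be reported as busy
        [[ -n $mount_point ]] || return 0  # nothing mounted to check
        mountpoint -q "$mount_point" || return 1
        [[ -e $test_file ]] && rm "$test_file"   # dummy file existed; clean it up
        return 0
    }
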
00:06:10.248   00:36:59	-- setup/devices.sh@110 -- # cleanup_nvme
00:06:10.248   00:36:59	-- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:10.248   00:36:59	-- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:10.248   00:36:59	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:06:10.248  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:06:10.248   00:36:59	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:06:10.248   00:36:59	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:10.507  /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:06:10.507  /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:06:10.507  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:06:10.507  /dev/nvme0n1: calling ioctl to re-read partition table: Success
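
cleanup_nvme unmounts the test mount and then erases signatures with wipefs: first on the partition (the '53 ef' bytes are the ext4 superblock magic), then on the whole disk (the primary GPT header, its backup at the end of the drive, and the protective MBR's '55 aa'), after which the kernel re-reads the now-empty partition table. As a sketch:

    mnt=/tmp/nvme_mount   # illustrative mount point, as before
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic 53 ef
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT, backup GPT, PMBR
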
00:06:10.507   00:36:59	-- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:06:10.507   00:36:59	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:06:10.507   00:36:59	-- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:10.507   00:36:59	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:06:10.507   00:36:59	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:06:10.508   00:36:59	-- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:10.508   00:36:59	-- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:10.508   00:36:59	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:06:10.508   00:36:59	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:06:10.508   00:36:59	-- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:10.508   00:36:59	-- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:10.508   00:36:59	-- setup/devices.sh@53 -- # local found=0
00:06:10.508   00:36:59	-- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:10.508   00:36:59	-- setup/devices.sh@56 -- # :
00:06:10.508   00:36:59	-- setup/devices.sh@59 -- # local pci status
00:06:10.508   00:36:59	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:10.508    00:36:59	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:06:10.508    00:36:59	-- setup/devices.sh@47 -- # setup output config
00:06:10.508    00:36:59	-- setup/common.sh@9 -- # [[ output == output ]]
00:06:10.508    00:36:59	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:06:13.801   00:37:02	-- setup/devices.sh@63 -- # found=1
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801   00:37:02	-- setup/devices.sh@66 -- # (( found == 1 ))
00:06:13.801   00:37:02	-- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount ]]
00:06:13.801   00:37:02	-- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:13.801   00:37:02	-- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:06:13.801   00:37:02	-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:06:13.801   00:37:02	-- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:13.801   00:37:02	-- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' ''
00:06:13.801   00:37:02	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:06:13.801   00:37:02	-- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:06:13.801   00:37:02	-- setup/devices.sh@50 -- # local mount_point=
00:06:13.801   00:37:02	-- setup/devices.sh@51 -- # local test_file=
00:06:13.801   00:37:02	-- setup/devices.sh@53 -- # local found=0
00:06:13.801   00:37:02	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:06:13.801   00:37:02	-- setup/devices.sh@59 -- # local pci status
00:06:13.801   00:37:02	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:13.801    00:37:02	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:06:13.801    00:37:02	-- setup/devices.sh@47 -- # setup output config
00:06:13.801    00:37:02	-- setup/common.sh@9 -- # [[ output == output ]]
00:06:13.801    00:37:02	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:06:17.097   00:37:05	-- setup/devices.sh@63 -- # found=1
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:05	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:17.097   00:37:05	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:17.097   00:37:06	-- setup/devices.sh@66 -- # (( found == 1 ))
00:06:17.097   00:37:06	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:06:17.097   00:37:06	-- setup/devices.sh@68 -- # return 0
00:06:17.097   00:37:06	-- setup/devices.sh@128 -- # cleanup_nvme
00:06:17.097   00:37:06	-- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:17.097   00:37:06	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:17.097   00:37:06	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:06:17.097   00:37:06	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:17.097  /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:06:17.097  
00:06:17.097  real	0m12.335s
00:06:17.097  user	0m3.437s
00:06:17.097  sys	0m6.793s
00:06:17.097   00:37:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:17.097   00:37:06	-- common/autotest_common.sh@10 -- # set +x
00:06:17.097  ************************************
00:06:17.097  END TEST nvme_mount
00:06:17.097  ************************************
00:06:17.097   00:37:06	-- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:06:17.097   00:37:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:17.097   00:37:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:17.097   00:37:06	-- common/autotest_common.sh@10 -- # set +x
00:06:17.097  ************************************
00:06:17.097  START TEST dm_mount
00:06:17.097  ************************************
00:06:17.097   00:37:06	-- common/autotest_common.sh@1114 -- # dm_mount
00:06:17.097   00:37:06	-- setup/devices.sh@144 -- # pv=nvme0n1
00:06:17.097   00:37:06	-- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:06:17.097   00:37:06	-- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:06:17.097   00:37:06	-- setup/devices.sh@148 -- # partition_drive nvme0n1
00:06:17.097   00:37:06	-- setup/common.sh@39 -- # local disk=nvme0n1
00:06:17.097   00:37:06	-- setup/common.sh@40 -- # local part_no=2
00:06:17.097   00:37:06	-- setup/common.sh@41 -- # local size=1073741824
00:06:17.097   00:37:06	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:06:17.097   00:37:06	-- setup/common.sh@44 -- # parts=()
00:06:17.097   00:37:06	-- setup/common.sh@44 -- # local parts
00:06:17.097   00:37:06	-- setup/common.sh@46 -- # (( part = 1 ))
00:06:17.097   00:37:06	-- setup/common.sh@46 -- # (( part <= part_no ))
00:06:17.097   00:37:06	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:17.097   00:37:06	-- setup/common.sh@46 -- # (( part++ ))
00:06:17.097   00:37:06	-- setup/common.sh@46 -- # (( part <= part_no ))
00:06:17.097   00:37:06	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:17.097   00:37:06	-- setup/common.sh@46 -- # (( part++ ))
00:06:17.097   00:37:06	-- setup/common.sh@46 -- # (( part <= part_no ))
00:06:17.097   00:37:06	-- setup/common.sh@51 -- # (( size /= 512 ))
00:06:17.097   00:37:06	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:06:17.097   00:37:06	-- setup/common.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:06:18.036  Creating new GPT entries in memory.
00:06:18.036  GPT data structures destroyed! You may now partition the disk using fdisk or
00:06:18.036  other utilities.
00:06:18.036   00:37:07	-- setup/common.sh@57 -- # (( part = 1 ))
00:06:18.036   00:37:07	-- setup/common.sh@57 -- # (( part <= part_no ))
00:06:18.036   00:37:07	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:18.036   00:37:07	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:18.036   00:37:07	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:06:19.416  Creating new GPT entries in memory.
00:06:19.416  The operation has completed successfully.
00:06:19.416   00:37:08	-- setup/common.sh@57 -- # (( part++ ))
00:06:19.416   00:37:08	-- setup/common.sh@57 -- # (( part <= part_no ))
00:06:19.416   00:37:08	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:19.416   00:37:08	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:19.416   00:37:08	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:06:20.356  The operation has completed successfully.
00:06:20.356   00:37:09	-- setup/common.sh@57 -- # (( part++ ))
00:06:20.356   00:37:09	-- setup/common.sh@57 -- # (( part <= part_no ))
00:06:20.356   00:37:09	-- setup/common.sh@62 -- # wait 929137
00:06:20.356   00:37:09	-- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:06:20.356   00:37:09	-- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:20.356   00:37:09	-- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:20.356   00:37:09	-- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:06:20.356   00:37:09	-- setup/devices.sh@160 -- # for t in {1..5}
00:06:20.356   00:37:09	-- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:06:20.356   00:37:09	-- setup/devices.sh@161 -- # break
00:06:20.356   00:37:09	-- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:06:20.356    00:37:09	-- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:06:20.356   00:37:09	-- setup/devices.sh@165 -- # dm=/dev/dm-0
00:06:20.356   00:37:09	-- setup/devices.sh@166 -- # dm=dm-0
00:06:20.356   00:37:09	-- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:06:20.356   00:37:09	-- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
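
dm_mount assembles a device-mapper target named nvme_dm_test from the two 1 GiB partitions, resolves the /dev/mapper symlink to its dm-N node, and confirms that both partitions list that node under holders/. The table fed to dmsetup create is not visible in the trace; the linear concatenation below is an illustrative stand-in, not the job's actual table:

    # 2097152 sectors = 1 GiB per partition; the table layout is assumed.
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' |
        dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # /dev/dm-0 in this run
    dm=${dm##*/}
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # both partitions hold dm-0
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]
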
00:06:20.356   00:37:09	-- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:20.356   00:37:09	-- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount size=
00:06:20.356   00:37:09	-- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:20.356   00:37:09	-- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:06:20.356   00:37:09	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:06:20.356   00:37:09	-- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:20.356   00:37:09	-- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:20.356   00:37:09	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:06:20.356   00:37:09	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:06:20.356   00:37:09	-- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:20.356   00:37:09	-- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:20.356   00:37:09	-- setup/devices.sh@53 -- # local found=0
00:06:20.356   00:37:09	-- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:06:20.356   00:37:09	-- setup/devices.sh@56 -- # :
00:06:20.356   00:37:09	-- setup/devices.sh@59 -- # local pci status
00:06:20.356   00:37:09	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:20.356    00:37:09	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:06:20.356    00:37:09	-- setup/devices.sh@47 -- # setup output config
00:06:20.356    00:37:09	-- setup/common.sh@9 -- # [[ output == output ]]
00:06:20.356    00:37:09	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:06:23.652   00:37:12	-- setup/devices.sh@63 -- # found=1
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652   00:37:12	-- setup/devices.sh@66 -- # (( found == 1 ))
00:06:23.652   00:37:12	-- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount ]]
00:06:23.652   00:37:12	-- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:23.652   00:37:12	-- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:06:23.652   00:37:12	-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:06:23.652   00:37:12	-- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:23.652   00:37:12	-- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:06:23.652   00:37:12	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:06:23.652   00:37:12	-- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:06:23.652   00:37:12	-- setup/devices.sh@50 -- # local mount_point=
00:06:23.652   00:37:12	-- setup/devices.sh@51 -- # local test_file=
00:06:23.652   00:37:12	-- setup/devices.sh@53 -- # local found=0
00:06:23.652   00:37:12	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:06:23.652   00:37:12	-- setup/devices.sh@59 -- # local pci status
00:06:23.652   00:37:12	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:23.652    00:37:12	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:06:23.652    00:37:12	-- setup/devices.sh@47 -- # setup output config
00:06:23.652    00:37:12	-- setup/common.sh@9 -- # [[ output == output ]]
00:06:23.652    00:37:12	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:06:26.955   00:37:15	-- setup/devices.sh@63 -- # found=1
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:15	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:06:26.955   00:37:15	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:06:26.955   00:37:16	-- setup/devices.sh@66 -- # (( found == 1 ))
00:06:26.955   00:37:16	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:06:26.955   00:37:16	-- setup/devices.sh@68 -- # return 0
00:06:26.955   00:37:16	-- setup/devices.sh@187 -- # cleanup_dm
00:06:26.955   00:37:16	-- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:26.955   00:37:16	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:06:26.955   00:37:16	-- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:06:26.955   00:37:16	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:26.955   00:37:16	-- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:06:26.955  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:06:26.955   00:37:16	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:06:26.955   00:37:16	-- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:06:26.955  
00:06:26.955  real	0m9.970s
00:06:26.955  user	0m2.389s
00:06:26.955  sys	0m4.673s
00:06:26.955   00:37:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:26.955   00:37:16	-- common/autotest_common.sh@10 -- # set +x
00:06:26.955  ************************************
00:06:26.955  END TEST dm_mount
00:06:26.955  ************************************
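
cleanup_dm is the mirror image: unmount the dm mount point if one is mounted, tear the mapping down with dmsetup remove --force, then wipe both backing partitions. Sketch:

    dm_mnt=/tmp/dm_mount   # illustrative; the job uses .../spdk/test/setup/dm_mount
    mountpoint -q "$dm_mnt" && umount "$dm_mnt"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
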
00:06:26.955   00:37:16	-- setup/devices.sh@1 -- # cleanup
00:06:26.955   00:37:16	-- setup/devices.sh@11 -- # cleanup_nvme
00:06:26.955   00:37:16	-- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:06:27.215   00:37:16	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:27.215   00:37:16	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:06:27.215   00:37:16	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:06:27.215   00:37:16	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:27.477  /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:06:27.477  /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:06:27.477  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:06:27.477  /dev/nvme0n1: calling ioctl to re-read partition table: Success
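The three erasures on /dev/nvme0n1 correspond to the primary GPT header signature ("EFI PART", 45 46 49 20 50 41 52 54, at byte offset 0x200), the backup GPT header near the end of the disk, and the 55 aa protective-MBR signature at offset 0x1fe; the final ioctl makes the kernel re-read the now-empty partition table. To inspect before erasing, wipefs has a dry-run mode:

    wipefs --no-act /dev/nvme0n1          # list detected signatures without touching them
    wipefs --all --backup /dev/nvme0n1    # erase, saving restorable wipefs-*.bak files in $HOME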
00:06:27.477   00:37:16	-- setup/devices.sh@12 -- # cleanup_dm
00:06:27.477   00:37:16	-- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:06:27.477   00:37:16	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:06:27.477   00:37:16	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:27.477   00:37:16	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:06:27.477   00:37:16	-- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:06:27.477   00:37:16	-- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:06:27.477  
00:06:27.477  real	0m26.641s
00:06:27.477  user	0m7.291s
00:06:27.477  sys	0m14.259s
00:06:27.477   00:37:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:27.477   00:37:16	-- common/autotest_common.sh@10 -- # set +x
00:06:27.477  ************************************
00:06:27.477  END TEST devices
00:06:27.477  ************************************
00:06:27.477  
00:06:27.477  real	1m35.386s
00:06:27.477  user	0m27.687s
00:06:27.477  sys	0m51.763s
00:06:27.477   00:37:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:27.477   00:37:16	-- common/autotest_common.sh@10 -- # set +x
00:06:27.477  ************************************
00:06:27.477  END TEST setup.sh
00:06:27.477  ************************************
00:06:27.477   00:37:16	-- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status
00:06:30.768  Hugepages
00:06:30.768  node     hugesize     free /  total
00:06:30.768  node0   1048576kB        0 /      0
00:06:30.768  node0      2048kB     2048 /   2048
00:06:30.768  node1   1048576kB        0 /      0
00:06:30.768  node1      2048kB        0 /      0
00:06:30.768  
00:06:30.768  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:30.768  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:06:30.768  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:06:30.768  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:06:30.768  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:06:30.768  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:06:30.768  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:06:30.768  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:06:30.768  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:06:30.768  NVMe                      0000:5e:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:06:30.768  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:06:30.768  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:06:30.768  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:06:30.768  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:06:30.768  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:06:30.768  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:06:30.768  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:06:30.768  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
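The status table comes straight from sysfs. Roughly the same columns can be recovered by hand with a sketch like this (illustrative, not the actual setup.sh implementation):

    for dev in /sys/bus/pci/devices/*; do
        drv=-
        [[ -e $dev/driver ]] && drv=$(basename "$(readlink "$dev/driver")")
        printf '%-14s vendor=%s device=%s numa=%s driver=%s\n' "${dev##*/}" \
            "$(cat "$dev/vendor")" "$(cat "$dev/device")" "$(cat "$dev/numa_node")" "$drv"
    done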
00:06:30.768    00:37:19	-- spdk/autotest.sh@128 -- # uname -s
00:06:30.768   00:37:19	-- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:06:30.768   00:37:19	-- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:06:30.768   00:37:19	-- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:06:34.059  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:34.059  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:37.358  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
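Each "-> vfio-pci" line is a device being detached from its kernel driver and handed to vfio-pci so DPDK can drive it from userspace. The generic sysfs sequence behind such a rebind (a sketch; setup.sh adds allowlists and hugepage bookkeeping on top) is:

    bdf=0000:5e:00.0
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override   # pin the next probe to vfio-pci
    echo "$bdf"   > /sys/bus/pci/devices/$bdf/driver/unbind     # detach the current driver (nvme here)
    echo "$bdf"   > /sys/bus/pci/drivers_probe                  # re-probe; vfio-pci claims the device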
00:06:37.358   00:37:26	-- common/autotest_common.sh@1527 -- # sleep 1
00:06:38.296   00:37:27	-- common/autotest_common.sh@1528 -- # bdfs=()
00:06:38.297   00:37:27	-- common/autotest_common.sh@1528 -- # local bdfs
00:06:38.297   00:37:27	-- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:06:38.297    00:37:27	-- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:06:38.297    00:37:27	-- common/autotest_common.sh@1508 -- # bdfs=()
00:06:38.297    00:37:27	-- common/autotest_common.sh@1508 -- # local bdfs
00:06:38.297    00:37:27	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:38.297     00:37:27	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:38.297     00:37:27	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:06:38.297    00:37:27	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:06:38.297    00:37:27	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
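get_nvme_bdfs, traced above, finds NVMe controllers by asking gen_nvme.sh for an SPDK bdev config and pulling out each controller's transport address:

    /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh \
        | jq -r '.config[].params.traddr'    # prints 0000:5e:00.0 on this node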
00:06:38.297   00:37:27	-- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:06:41.586  Waiting for block devices as requested
00:06:41.586  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:06:41.586  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:41.586  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:41.586  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:41.846  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:41.846  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:41.846  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:42.105  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:42.105  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:06:42.105  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:42.363  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:42.363  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:42.363  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:42.622  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:42.622  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:42.622  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:42.881  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:06:42.881   00:37:31	-- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:06:42.881    00:37:31	-- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:06:42.881     00:37:31	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0
00:06:42.881     00:37:31	-- common/autotest_common.sh@1497 -- # grep 0000:5e:00.0/nvme/nvme
00:06:42.881    00:37:31	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:42.881    00:37:31	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:06:42.881     00:37:31	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:42.881    00:37:32	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:06:42.881   00:37:32	-- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0
00:06:42.881   00:37:32	-- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]]
00:06:42.881    00:37:32	-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:42.881    00:37:32	-- common/autotest_common.sh@1540 -- # grep oacs
00:06:42.881    00:37:32	-- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:42.881   00:37:32	-- common/autotest_common.sh@1540 -- # oacs=' 0xe'
00:06:42.881   00:37:32	-- common/autotest_common.sh@1541 -- # oacs_ns_manage=8
00:06:42.881   00:37:32	-- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]]
00:06:42.881    00:37:32	-- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0
00:06:42.881    00:37:32	-- common/autotest_common.sh@1549 -- # grep unvmcap
00:06:42.881    00:37:32	-- common/autotest_common.sh@1549 -- # cut -d: -f2
00:06:42.881   00:37:32	-- common/autotest_common.sh@1549 -- # unvmcap=' 0'
00:06:42.881   00:37:32	-- common/autotest_common.sh@1550 -- # [[  0 -eq 0 ]]
00:06:42.881   00:37:32	-- common/autotest_common.sh@1552 -- # continue
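The revert path first checks the controller's Optional Admin Command Support field: bit 3 (mask 0x8) advertises Namespace Management, and this controller reports oacs=0xe, so 0xe & 0x8 = 8 and the capability check passes; unvmcap=0 then means there is no unallocated capacity to reclaim, so the loop continues without touching the namespaces. The same test, condensed:

    oacs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/^oacs/ {print $2}')
    if (( oacs & 0x8 )); then     # bit 3 of OACS = Namespace Management supported
        echo "namespace management supported; unvmcap decides whether a revert is needed"
    fi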
00:06:42.881   00:37:32	-- spdk/autotest.sh@133 -- # timing_exit pre_cleanup
00:06:42.881   00:37:32	-- common/autotest_common.sh@728 -- # xtrace_disable
00:06:42.881   00:37:32	-- common/autotest_common.sh@10 -- # set +x
00:06:42.881   00:37:32	-- spdk/autotest.sh@136 -- # timing_enter afterboot
00:06:42.881   00:37:32	-- common/autotest_common.sh@722 -- # xtrace_disable
00:06:42.881   00:37:32	-- common/autotest_common.sh@10 -- # set +x
00:06:42.881   00:37:32	-- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:06:46.242  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:46.242  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:49.562  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:06:49.562   00:37:38	-- spdk/autotest.sh@138 -- # timing_exit afterboot
00:06:49.562   00:37:38	-- common/autotest_common.sh@728 -- # xtrace_disable
00:06:49.562   00:37:38	-- common/autotest_common.sh@10 -- # set +x
00:06:49.562   00:37:38	-- spdk/autotest.sh@142 -- # opal_revert_cleanup
00:06:49.562   00:37:38	-- common/autotest_common.sh@1586 -- # mapfile -t bdfs
00:06:49.562    00:37:38	-- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54
00:06:49.562    00:37:38	-- common/autotest_common.sh@1572 -- # bdfs=()
00:06:49.562    00:37:38	-- common/autotest_common.sh@1572 -- # local bdfs
00:06:49.562     00:37:38	-- common/autotest_common.sh@1574 -- # get_nvme_bdfs
00:06:49.562     00:37:38	-- common/autotest_common.sh@1508 -- # bdfs=()
00:06:49.562     00:37:38	-- common/autotest_common.sh@1508 -- # local bdfs
00:06:49.562     00:37:38	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:49.562      00:37:38	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:49.562      00:37:38	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:06:49.562     00:37:38	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:06:49.562     00:37:38	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:06:49.562    00:37:38	-- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs)
00:06:49.562     00:37:38	-- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:06:49.562    00:37:38	-- common/autotest_common.sh@1575 -- # device=0x0a54
00:06:49.562    00:37:38	-- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:06:49.562    00:37:38	-- common/autotest_common.sh@1577 -- # bdfs+=($bdf)
00:06:49.562    00:37:38	-- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:5e:00.0
00:06:49.562   00:37:38	-- common/autotest_common.sh@1587 -- # [[ -z 0000:5e:00.0 ]]
00:06:49.562   00:37:38	-- common/autotest_common.sh@1592 -- # spdk_tgt_pid=937421
00:06:49.562   00:37:38	-- common/autotest_common.sh@1593 -- # waitforlisten 937421
00:06:49.562   00:37:38	-- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:06:49.562   00:37:38	-- common/autotest_common.sh@829 -- # '[' -z 937421 ']'
00:06:49.562   00:37:38	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:49.562   00:37:38	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:49.562   00:37:38	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:49.562  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:49.562   00:37:38	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:49.562   00:37:38	-- common/autotest_common.sh@10 -- # set +x
00:06:49.562  [2024-12-17 00:37:38.793466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:49.562  [2024-12-17 00:37:38.793537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937421 ]
00:06:49.822  EAL: No free 2048 kB hugepages reported on node 1
00:06:49.822  [2024-12-17 00:37:38.901655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:49.822  [2024-12-17 00:37:38.953118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:49.822  [2024-12-17 00:37:38.953279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.081  [2024-12-17 00:37:39.123698] 'OCF_Core' volume operations registered
00:06:50.081  [2024-12-17 00:37:39.125869] 'OCF_Cache' volume operations registered
00:06:50.081  [2024-12-17 00:37:39.128541] 'OCF Composite' volume operations registered
00:06:50.081  [2024-12-17 00:37:39.130737] 'SPDK_block_device' volume operations registered
00:06:50.650   00:37:39	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:50.650   00:37:39	-- common/autotest_common.sh@862 -- # return 0
00:06:50.650   00:37:39	-- common/autotest_common.sh@1595 -- # bdf_id=0
00:06:50.650   00:37:39	-- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}"
00:06:50.650   00:37:39	-- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:06:53.941  nvme0n1
00:06:53.941   00:37:42	-- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:06:53.941  [2024-12-17 00:37:43.090159] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:06:53.941  request:
00:06:53.941  {
00:06:53.941    "nvme_ctrlr_name": "nvme0",
00:06:53.941    "password": "test",
00:06:53.941    "method": "bdev_nvme_opal_revert",
00:06:53.941    "req_id": 1
00:06:53.942  }
00:06:53.942  Got JSON-RPC error response
00:06:53.942  response:
00:06:53.942  {
00:06:53.942    "code": -32602,
00:06:53.942    "message": "Invalid parameters"
00:06:53.942  }
00:06:53.942   00:37:43	-- common/autotest_common.sh@1599 -- # true
00:06:53.942   00:37:43	-- common/autotest_common.sh@1600 -- # (( ++bdf_id ))
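bdev_nvme_opal_revert asks the attached controller to perform an Opal revert with the supplied password; this device (id 0a54) does not support Opal, so the -32602 "Invalid parameters" response is the expected outcome, and the `true` above shows the failure being tolerated before bdf_id is advanced. Reproduced by hand (assuming the error is swallowed with || true, as the trace suggests):

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true   # non-Opal drives reject this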
00:06:53.942   00:37:43	-- common/autotest_common.sh@1603 -- # killprocess 937421
00:06:53.942   00:37:43	-- common/autotest_common.sh@936 -- # '[' -z 937421 ']'
00:06:53.942   00:37:43	-- common/autotest_common.sh@940 -- # kill -0 937421
00:06:53.942    00:37:43	-- common/autotest_common.sh@941 -- # uname
00:06:53.942   00:37:43	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:53.942    00:37:43	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 937421
00:06:53.942   00:37:43	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:53.942   00:37:43	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:53.942   00:37:43	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 937421'
00:06:53.942  killing process with pid 937421
00:06:53.942   00:37:43	-- common/autotest_common.sh@955 -- # kill 937421
00:06:53.942   00:37:43	-- common/autotest_common.sh@960 -- # wait 937421
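killprocess, traced above, verifies the pid is alive (kill -0), resolves its process name (reactor_0 for an SPDK target), special-cases processes running under sudo (not taken here), and only then kills and reaps it. A trimmed sketch of that flow:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if the pid is already gone
        ps --no-headers -o comm= "$pid"             # reactor_0 for spdk_tgt
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # wait only reaps our own children
    }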
00:06:58.134   00:37:47	-- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']'
00:06:58.134   00:37:47	-- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']'
00:06:58.134   00:37:47	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:06:58.134   00:37:47	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:06:58.134   00:37:47	-- spdk/autotest.sh@160 -- # timing_enter lib
00:06:58.134   00:37:47	-- common/autotest_common.sh@722 -- # xtrace_disable
00:06:58.134   00:37:47	-- common/autotest_common.sh@10 -- # set +x
00:06:58.134   00:37:47	-- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh
00:06:58.134   00:37:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:58.134   00:37:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:58.134   00:37:47	-- common/autotest_common.sh@10 -- # set +x
00:06:58.134  ************************************
00:06:58.134  START TEST env
00:06:58.134  ************************************
00:06:58.134   00:37:47	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh
00:06:58.393  * Looking for test storage...
00:06:58.393  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env
00:06:58.393    00:37:47	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:58.393     00:37:47	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:58.393     00:37:47	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:58.393    00:37:47	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:58.393    00:37:47	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:58.393    00:37:47	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:58.393    00:37:47	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:58.394    00:37:47	-- scripts/common.sh@335 -- # IFS=.-:
00:06:58.394    00:37:47	-- scripts/common.sh@335 -- # read -ra ver1
00:06:58.394    00:37:47	-- scripts/common.sh@336 -- # IFS=.-:
00:06:58.394    00:37:47	-- scripts/common.sh@336 -- # read -ra ver2
00:06:58.394    00:37:47	-- scripts/common.sh@337 -- # local 'op=<'
00:06:58.394    00:37:47	-- scripts/common.sh@339 -- # ver1_l=2
00:06:58.394    00:37:47	-- scripts/common.sh@340 -- # ver2_l=1
00:06:58.394    00:37:47	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:58.394    00:37:47	-- scripts/common.sh@343 -- # case "$op" in
00:06:58.394    00:37:47	-- scripts/common.sh@344 -- # : 1
00:06:58.394    00:37:47	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:58.394    00:37:47	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:58.394     00:37:47	-- scripts/common.sh@364 -- # decimal 1
00:06:58.394     00:37:47	-- scripts/common.sh@352 -- # local d=1
00:06:58.394     00:37:47	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:58.394     00:37:47	-- scripts/common.sh@354 -- # echo 1
00:06:58.394    00:37:47	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:58.394     00:37:47	-- scripts/common.sh@365 -- # decimal 2
00:06:58.394     00:37:47	-- scripts/common.sh@352 -- # local d=2
00:06:58.394     00:37:47	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:58.394     00:37:47	-- scripts/common.sh@354 -- # echo 2
00:06:58.394    00:37:47	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:58.394    00:37:47	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:58.394    00:37:47	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:58.394    00:37:47	-- scripts/common.sh@367 -- # return 0
00:06:58.394    00:37:47	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:58.394    00:37:47	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:58.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.394  		--rc genhtml_branch_coverage=1
00:06:58.394  		--rc genhtml_function_coverage=1
00:06:58.394  		--rc genhtml_legend=1
00:06:58.394  		--rc geninfo_all_blocks=1
00:06:58.394  		--rc geninfo_unexecuted_blocks=1
00:06:58.394  		
00:06:58.394  		'
00:06:58.394    00:37:47	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:58.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.394  		--rc genhtml_branch_coverage=1
00:06:58.394  		--rc genhtml_function_coverage=1
00:06:58.394  		--rc genhtml_legend=1
00:06:58.394  		--rc geninfo_all_blocks=1
00:06:58.394  		--rc geninfo_unexecuted_blocks=1
00:06:58.394  		
00:06:58.394  		'
00:06:58.394    00:37:47	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:58.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.394  		--rc genhtml_branch_coverage=1
00:06:58.394  		--rc genhtml_function_coverage=1
00:06:58.394  		--rc genhtml_legend=1
00:06:58.394  		--rc geninfo_all_blocks=1
00:06:58.394  		--rc geninfo_unexecuted_blocks=1
00:06:58.394  		
00:06:58.394  		'
00:06:58.394    00:37:47	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:58.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.394  		--rc genhtml_branch_coverage=1
00:06:58.394  		--rc genhtml_function_coverage=1
00:06:58.394  		--rc genhtml_legend=1
00:06:58.394  		--rc geninfo_all_blocks=1
00:06:58.394  		--rc geninfo_unexecuted_blocks=1
00:06:58.394  		
00:06:58.394  		'
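The `lt 1.15 2` check above decides whether the installed lcov is old enough to need the explicit --rc branch/function coverage switches. cmp_versions splits both version strings on ".", "-" and ":" and compares them component by component; a minimal sketch of the same idea (numeric components only):

    version_lt() {                          # returns 0 when $1 < $2
        local IFS=.-:
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: passing --rc coverage options"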
00:06:58.394   00:37:47	-- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut
00:06:58.394   00:37:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:58.394   00:37:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:58.394   00:37:47	-- common/autotest_common.sh@10 -- # set +x
00:06:58.394  ************************************
00:06:58.394  START TEST env_memory
00:06:58.394  ************************************
00:06:58.394   00:37:47	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut
00:06:58.394  
00:06:58.394  
00:06:58.394       CUnit - A unit testing framework for C - Version 2.1-3
00:06:58.394       http://cunit.sourceforge.net/
00:06:58.394  
00:06:58.394  
00:06:58.394  Suite: memory
00:06:58.394    Test: alloc and free memory map ...[2024-12-17 00:37:47.589487] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:58.394  passed
00:06:58.394    Test: mem map translation ...[2024-12-17 00:37:47.618689] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:58.394  [2024-12-17 00:37:47.618713] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:58.394  [2024-12-17 00:37:47.618768] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:58.394  [2024-12-17 00:37:47.618781] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:58.654  passed
00:06:58.654    Test: mem map registration ...[2024-12-17 00:37:47.676525] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:06:58.654  [2024-12-17 00:37:47.676551] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:06:58.654  passed
00:06:58.654    Test: mem map adjacent registrations ...passed
00:06:58.654  
00:06:58.654  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:58.654                suites      1      1    n/a      0        0
00:06:58.654                 tests      4      4      4      0        0
00:06:58.654               asserts    152    152    152      0      n/a
00:06:58.654  
00:06:58.654  Elapsed time =    0.201 seconds
00:06:58.654  
00:06:58.654  real	0m0.216s
00:06:58.654  user	0m0.203s
00:06:58.654  sys	0m0.012s
00:06:58.654   00:37:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:58.654   00:37:47	-- common/autotest_common.sh@10 -- # set +x
00:06:58.654  ************************************
00:06:58.654  END TEST env_memory
00:06:58.654  ************************************
00:06:58.654   00:37:47	-- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:58.654   00:37:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:58.654   00:37:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:58.654   00:37:47	-- common/autotest_common.sh@10 -- # set +x
00:06:58.654  ************************************
00:06:58.654  START TEST env_vtophys
00:06:58.654  ************************************
00:06:58.654   00:37:47	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:58.654  EAL: lib.eal log level changed from notice to debug
00:06:58.654  EAL: Detected lcore 0 as core 0 on socket 0
00:06:58.654  EAL: Detected lcore 1 as core 1 on socket 0
00:06:58.654  EAL: Detected lcore 2 as core 2 on socket 0
00:06:58.654  EAL: Detected lcore 3 as core 3 on socket 0
00:06:58.654  EAL: Detected lcore 4 as core 4 on socket 0
00:06:58.654  EAL: Detected lcore 5 as core 8 on socket 0
00:06:58.654  EAL: Detected lcore 6 as core 9 on socket 0
00:06:58.654  EAL: Detected lcore 7 as core 10 on socket 0
00:06:58.654  EAL: Detected lcore 8 as core 11 on socket 0
00:06:58.654  EAL: Detected lcore 9 as core 16 on socket 0
00:06:58.654  EAL: Detected lcore 10 as core 17 on socket 0
00:06:58.654  EAL: Detected lcore 11 as core 18 on socket 0
00:06:58.654  EAL: Detected lcore 12 as core 19 on socket 0
00:06:58.654  EAL: Detected lcore 13 as core 20 on socket 0
00:06:58.654  EAL: Detected lcore 14 as core 24 on socket 0
00:06:58.654  EAL: Detected lcore 15 as core 25 on socket 0
00:06:58.654  EAL: Detected lcore 16 as core 26 on socket 0
00:06:58.654  EAL: Detected lcore 17 as core 27 on socket 0
00:06:58.654  EAL: Detected lcore 18 as core 0 on socket 1
00:06:58.654  EAL: Detected lcore 19 as core 1 on socket 1
00:06:58.654  EAL: Detected lcore 20 as core 2 on socket 1
00:06:58.654  EAL: Detected lcore 21 as core 3 on socket 1
00:06:58.654  EAL: Detected lcore 22 as core 4 on socket 1
00:06:58.654  EAL: Detected lcore 23 as core 8 on socket 1
00:06:58.654  EAL: Detected lcore 24 as core 9 on socket 1
00:06:58.654  EAL: Detected lcore 25 as core 10 on socket 1
00:06:58.654  EAL: Detected lcore 26 as core 11 on socket 1
00:06:58.654  EAL: Detected lcore 27 as core 16 on socket 1
00:06:58.654  EAL: Detected lcore 28 as core 17 on socket 1
00:06:58.654  EAL: Detected lcore 29 as core 18 on socket 1
00:06:58.654  EAL: Detected lcore 30 as core 19 on socket 1
00:06:58.654  EAL: Detected lcore 31 as core 20 on socket 1
00:06:58.654  EAL: Detected lcore 32 as core 24 on socket 1
00:06:58.654  EAL: Detected lcore 33 as core 25 on socket 1
00:06:58.654  EAL: Detected lcore 34 as core 26 on socket 1
00:06:58.654  EAL: Detected lcore 35 as core 27 on socket 1
00:06:58.654  EAL: Detected lcore 36 as core 0 on socket 0
00:06:58.654  EAL: Detected lcore 37 as core 1 on socket 0
00:06:58.654  EAL: Detected lcore 38 as core 2 on socket 0
00:06:58.654  EAL: Detected lcore 39 as core 3 on socket 0
00:06:58.654  EAL: Detected lcore 40 as core 4 on socket 0
00:06:58.654  EAL: Detected lcore 41 as core 8 on socket 0
00:06:58.654  EAL: Detected lcore 42 as core 9 on socket 0
00:06:58.654  EAL: Detected lcore 43 as core 10 on socket 0
00:06:58.654  EAL: Detected lcore 44 as core 11 on socket 0
00:06:58.654  EAL: Detected lcore 45 as core 16 on socket 0
00:06:58.654  EAL: Detected lcore 46 as core 17 on socket 0
00:06:58.654  EAL: Detected lcore 47 as core 18 on socket 0
00:06:58.654  EAL: Detected lcore 48 as core 19 on socket 0
00:06:58.654  EAL: Detected lcore 49 as core 20 on socket 0
00:06:58.654  EAL: Detected lcore 50 as core 24 on socket 0
00:06:58.654  EAL: Detected lcore 51 as core 25 on socket 0
00:06:58.654  EAL: Detected lcore 52 as core 26 on socket 0
00:06:58.654  EAL: Detected lcore 53 as core 27 on socket 0
00:06:58.654  EAL: Detected lcore 54 as core 0 on socket 1
00:06:58.654  EAL: Detected lcore 55 as core 1 on socket 1
00:06:58.654  EAL: Detected lcore 56 as core 2 on socket 1
00:06:58.654  EAL: Detected lcore 57 as core 3 on socket 1
00:06:58.654  EAL: Detected lcore 58 as core 4 on socket 1
00:06:58.654  EAL: Detected lcore 59 as core 8 on socket 1
00:06:58.654  EAL: Detected lcore 60 as core 9 on socket 1
00:06:58.654  EAL: Detected lcore 61 as core 10 on socket 1
00:06:58.654  EAL: Detected lcore 62 as core 11 on socket 1
00:06:58.654  EAL: Detected lcore 63 as core 16 on socket 1
00:06:58.654  EAL: Detected lcore 64 as core 17 on socket 1
00:06:58.654  EAL: Detected lcore 65 as core 18 on socket 1
00:06:58.654  EAL: Detected lcore 66 as core 19 on socket 1
00:06:58.654  EAL: Detected lcore 67 as core 20 on socket 1
00:06:58.654  EAL: Detected lcore 68 as core 24 on socket 1
00:06:58.654  EAL: Detected lcore 69 as core 25 on socket 1
00:06:58.654  EAL: Detected lcore 70 as core 26 on socket 1
00:06:58.654  EAL: Detected lcore 71 as core 27 on socket 1
00:06:58.654  EAL: Maximum logical cores by configuration: 128
00:06:58.654  EAL: Detected CPU lcores: 72
00:06:58.654  EAL: Detected NUMA nodes: 2
00:06:58.654  EAL: Checking presence of .so 'librte_eal.so.24.0'
00:06:58.654  EAL: Detected shared linkage of DPDK
00:06:58.654  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0
00:06:58.654  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0
00:06:58.654  EAL: Registered [vdev] bus.
00:06:58.654  EAL: bus.vdev log level changed from disabled to notice
00:06:58.654  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0
00:06:58.654  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0
00:06:58.654  EAL: pmd.net.i40e.init log level changed from disabled to notice
00:06:58.654  EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:06:58.655  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:06:58.655  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:06:58.655  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:06:58.655  EAL: open shared lib /var/jenkins/workspace/nvme-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:06:58.655  EAL: No shared files mode enabled, IPC will be disabled
00:06:58.655  EAL: No shared files mode enabled, IPC is disabled
00:06:58.655  EAL: Bus pci wants IOVA as 'DC'
00:06:58.655  EAL: Bus vdev wants IOVA as 'DC'
00:06:58.655  EAL: Buses did not request a specific IOVA mode.
00:06:58.655  EAL: IOMMU is available, selecting IOVA as VA mode.
00:06:58.655  EAL: Selected IOVA mode 'VA'
00:06:58.655  EAL: No free 2048 kB hugepages reported on node 1
00:06:58.655  EAL: Probing VFIO support...
00:06:58.655  EAL: IOMMU type 1 (Type 1) is supported
00:06:58.655  EAL: IOMMU type 7 (sPAPR) is not supported
00:06:58.655  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:06:58.655  EAL: VFIO support initialized
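EAL selects IOVA-as-VA here because an IOMMU is active and VFIO type 1 is usable. The same preconditions can be sanity-checked from the shell (a rough check; EAL itself probes via ioctls on /dev/vfio/vfio):

    lsmod | grep -q vfio_pci && echo "vfio-pci module loaded"
    ls /sys/kernel/iommu_groups | wc -l     # non-zero means the IOMMU is grouping devices
    [ -c /dev/vfio/vfio ] && echo "VFIO container device present"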
00:06:58.655  EAL: Ask a virtual area of 0x2e000 bytes
00:06:58.655  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:58.655  EAL: Setting up physically contiguous memory...
00:06:58.655  EAL: Setting maximum number of open files to 524288
00:06:58.655  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:58.655  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:58.655  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:58.655  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:58.655  EAL: Ask a virtual area of 0x61000 bytes
00:06:58.655  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:58.655  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:58.655  EAL: Ask a virtual area of 0x400000000 bytes
00:06:58.655  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:58.655  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
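EAL has now reserved four memseg lists per NUMA node, each a small 0x61000-byte metadata area plus 0x400000000 bytes of address space, so hugepages can be mapped later without further VA negotiation. The sizes are easier to read in GiB:

    printf '%d GiB per memseg list, %d GiB of VA across 8 lists\n' \
        $((0x400000000 >> 30)) $((8 * (0x400000000 >> 30)))
    # 16 GiB per memseg list, 128 GiB of VA across 8 lists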
00:06:58.655  EAL: Hugepages will be freed exactly as allocated.
00:06:58.655  EAL: No shared files mode enabled, IPC is disabled
00:06:58.655  EAL: No shared files mode enabled, IPC is disabled
00:06:58.655  EAL: TSC frequency is ~2300000 KHz
00:06:58.655  EAL: Main lcore 0 is ready (tid=7f154c7c4a00;cpuset=[0])
00:06:58.655  EAL: Trying to obtain current memory policy.
00:06:58.655  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.655  EAL: Restoring previous memory policy: 0
00:06:58.655  EAL: request: mp_malloc_sync
00:06:58.655  EAL: No shared files mode enabled, IPC is disabled
00:06:58.655  EAL: Heap on socket 0 was expanded by 2MB
00:06:58.655  EAL: PCI device 0000:41:00.0 on NUMA socket 0
00:06:58.655  EAL:   probe driver: 8086:37d2 net_i40e
00:06:58.655  EAL:   Not managed by a supported kernel driver, skipped
00:06:58.655  EAL: PCI device 0000:41:00.1 on NUMA socket 0
00:06:58.655  EAL:   probe driver: 8086:37d2 net_i40e
00:06:58.655  EAL:   Not managed by a supported kernel driver, skipped
00:06:58.655  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:06:58.915  EAL: Mem event callback 'spdk:(nil)' registered
00:06:58.915  
00:06:58.915  
00:06:58.915       CUnit - A unit testing framework for C - Version 2.1-3
00:06:58.915       http://cunit.sourceforge.net/
00:06:58.915  
00:06:58.915  
00:06:58.915  Suite: components_suite
00:06:58.915    Test: vtophys_malloc_test ...passed
00:06:58.915    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 4MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was shrunk by 4MB
00:06:58.915  EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 6MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was shrunk by 6MB
00:06:58.915  EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 10MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was shrunk by 10MB
00:06:58.915  EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 18MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was shrunk by 18MB
00:06:58.915  EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 34MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was shrunk by 34MB
00:06:58.915  EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 66MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was shrunk by 66MB
00:06:58.915  EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 130MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was shrunk by 130MB
00:06:58.915  EAL: Trying to obtain current memory policy.
00:06:58.915  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:58.915  EAL: Restoring previous memory policy: 4
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:58.915  EAL: request: mp_malloc_sync
00:06:58.915  EAL: No shared files mode enabled, IPC is disabled
00:06:58.915  EAL: Heap on socket 0 was expanded by 258MB
00:06:58.915  EAL: Calling mem event callback 'spdk:(nil)'
00:06:59.175  EAL: request: mp_malloc_sync
00:06:59.175  EAL: No shared files mode enabled, IPC is disabled
00:06:59.175  EAL: Heap on socket 0 was shrunk by 258MB
00:06:59.175  EAL: Trying to obtain current memory policy.
00:06:59.175  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:59.175  EAL: Restoring previous memory policy: 4
00:06:59.175  EAL: Calling mem event callback 'spdk:(nil)'
00:06:59.175  EAL: request: mp_malloc_sync
00:06:59.175  EAL: No shared files mode enabled, IPC is disabled
00:06:59.175  EAL: Heap on socket 0 was expanded by 514MB
00:06:59.434  EAL: Calling mem event callback 'spdk:(nil)'
00:06:59.434  EAL: request: mp_malloc_sync
00:06:59.434  EAL: No shared files mode enabled, IPC is disabled
00:06:59.434  EAL: Heap on socket 0 was shrunk by 514MB
00:06:59.434  EAL: Trying to obtain current memory policy.
00:06:59.434  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:59.693  EAL: Restoring previous memory policy: 4
00:06:59.693  EAL: Calling mem event callback 'spdk:(nil)'
00:06:59.693  EAL: request: mp_malloc_sync
00:06:59.693  EAL: No shared files mode enabled, IPC is disabled
00:06:59.693  EAL: Heap on socket 0 was expanded by 1026MB
00:06:59.693  EAL: Calling mem event callback 'spdk:(nil)'
00:06:59.952  EAL: request: mp_malloc_sync
00:06:59.952  EAL: No shared files mode enabled, IPC is disabled
00:06:59.952  EAL: Heap on socket 0 was shrunk by 1026MB
00:06:59.952  passed
00:06:59.952  
00:06:59.952  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:59.952                suites      1      1    n/a      0        0
00:06:59.952                 tests      2      2      2      0        0
00:06:59.952               asserts    497    497    497      0      n/a
00:06:59.952  
00:06:59.952  Elapsed time =    1.166 seconds
00:06:59.952  EAL: Calling mem event callback 'spdk:(nil)'
00:06:59.952  EAL: request: mp_malloc_sync
00:06:59.952  EAL: No shared files mode enabled, IPC is disabled
00:06:59.952  EAL: Heap on socket 0 was shrunk by 2MB
00:06:59.952  EAL: No shared files mode enabled, IPC is disabled
00:06:59.952  EAL: No shared files mode enabled, IPC is disabled
00:06:59.952  EAL: No shared files mode enabled, IPC is disabled
00:06:59.952  
00:06:59.953  real	0m1.338s
00:06:59.953  user	0m0.775s
00:06:59.953  sys	0m0.527s
00:06:59.953   00:37:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:59.953   00:37:49	-- common/autotest_common.sh@10 -- # set +x
00:06:59.953  ************************************
00:06:59.953  END TEST env_vtophys
00:06:59.953  ************************************
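The vtophys suite grew the heap through 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB, i.e. 2^n + 2 MB for n = 1..10, so every step forces a different mix of contiguous DPDK segments behind the allocation. The sequence is easy to confirm:

    for n in $(seq 1 10); do printf '%dMB ' $((2**n + 2)); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB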
00:06:59.953   00:37:49	-- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut
00:06:59.953   00:37:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:59.953   00:37:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:59.953   00:37:49	-- common/autotest_common.sh@10 -- # set +x
00:06:59.953  ************************************
00:06:59.953  START TEST env_pci
00:06:59.953  ************************************
00:06:59.953   00:37:49	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut
00:06:59.953  
00:06:59.953  
00:06:59.953       CUnit - A unit testing framework for C - Version 2.1-3
00:06:59.953       http://cunit.sourceforge.net/
00:06:59.953  
00:06:59.953  
00:06:59.953  Suite: pci
00:06:59.953    Test: pci_hook ...[2024-12-17 00:37:49.206833] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 938819 has claimed it
00:07:00.212  EAL: Cannot find device (10000:00:01.0)
00:07:00.212  EAL: Failed to attach device on primary process
00:07:00.212  passed
00:07:00.212  
00:07:00.212  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:00.212                suites      1      1    n/a      0        0
00:07:00.212                 tests      1      1      1      0        0
00:07:00.212               asserts     25     25     25      0      n/a
00:07:00.212  
00:07:00.212  Elapsed time =    0.033 seconds
00:07:00.212  
00:07:00.212  real	0m0.053s
00:07:00.212  user	0m0.014s
00:07:00.212  sys	0m0.039s
00:07:00.212   00:37:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:00.212   00:37:49	-- common/autotest_common.sh@10 -- # set +x
00:07:00.212  ************************************
00:07:00.212  END TEST env_pci
00:07:00.212  ************************************
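pci_hook exercises SPDK's per-device claim files: claiming a BDF leaves a lock at /var/tmp/spdk_pci_lock_<bdf>, and the expected failure above shows a second claim on the synthetic address 10000:00:01.0 being turned away. Stale claims from crashed runs can be inspected the same way (only safe to remove when no SPDK process is live):

    ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null   # one file per claimed device
    rm -f /var/tmp/spdk_pci_lock_10000:00:01.0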
00:07:00.212   00:37:49	-- env/env.sh@14 -- # argv='-c 0x1 '
00:07:00.212    00:37:49	-- env/env.sh@15 -- # uname
00:07:00.212   00:37:49	-- env/env.sh@15 -- # '[' Linux = Linux ']'
00:07:00.212   00:37:49	-- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:07:00.212   00:37:49	-- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:00.212   00:37:49	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:07:00.212   00:37:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:00.212   00:37:49	-- common/autotest_common.sh@10 -- # set +x
00:07:00.212  ************************************
00:07:00.212  START TEST env_dpdk_post_init
00:07:00.212  ************************************
00:07:00.212   00:37:49	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:00.212  EAL: Detected CPU lcores: 72
00:07:00.212  EAL: Detected NUMA nodes: 2
00:07:00.212  EAL: Detected shared linkage of DPDK
00:07:00.212  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:00.212  EAL: Selected IOVA mode 'VA'
00:07:00.212  EAL: No free 2048 kB hugepages reported on node 1
00:07:00.212  EAL: VFIO support initialized
00:07:00.212  TELEMETRY: No legacy callbacks, legacy socket not created
00:07:00.212  EAL: Using IOMMU type 1 (Type 1)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:07:00.472  EAL: Ignore mapping IO port bar(1)
00:07:00.472  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:07:01.041  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:07:01.300  EAL: Ignore mapping IO port bar(1)
00:07:01.300  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:07:06.575  EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:07:06.575  EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:07:06.835  Starting DPDK initialization...
00:07:06.835  Starting SPDK post initialization...
00:07:06.835  SPDK NVMe probe
00:07:06.835  Attaching to 0000:5e:00.0
00:07:06.835  Attached to 0000:5e:00.0
00:07:06.835  Cleaning up...
00:07:06.835  
00:07:06.835  real	0m6.725s
00:07:06.835  user	0m5.063s
00:07:06.835  sys	0m0.724s
00:07:06.835   00:37:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:06.835   00:37:56	-- common/autotest_common.sh@10 -- # set +x
00:07:06.835  ************************************
00:07:06.835  END TEST env_dpdk_post_init
00:07:06.835  ************************************
00:07:06.835    00:37:56	-- env/env.sh@26 -- # uname
00:07:06.835   00:37:56	-- env/env.sh@26 -- # '[' Linux = Linux ']'
00:07:06.835   00:37:56	-- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:07:06.835   00:37:56	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:06.835   00:37:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:06.835   00:37:56	-- common/autotest_common.sh@10 -- # set +x
00:07:06.835  ************************************
00:07:06.835  START TEST env_mem_callbacks
00:07:06.835  ************************************
00:07:06.835   00:37:56	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:07:07.094  EAL: Detected CPU lcores: 72
00:07:07.094  EAL: Detected NUMA nodes: 2
00:07:07.094  EAL: Detected shared linkage of DPDK
00:07:07.094  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:07.094  EAL: Selected IOVA mode 'VA'
00:07:07.094  EAL: No free 2048 kB hugepages reported on node 1
00:07:07.095  EAL: VFIO support initialized
00:07:07.095  TELEMETRY: No legacy callbacks, legacy socket not created
00:07:07.095  
00:07:07.095  
00:07:07.095       CUnit - A unit testing framework for C - Version 2.1-3
00:07:07.095       http://cunit.sourceforge.net/
00:07:07.095  
00:07:07.095  
00:07:07.095  Suite: memory
00:07:07.095    Test: test ...
00:07:07.095  register 0x200000200000 2097152
00:07:07.095  malloc 3145728
00:07:07.095  register 0x200000400000 4194304
00:07:07.095  buf 0x200000500000 len 3145728 PASSED
00:07:07.095  malloc 64
00:07:07.095  buf 0x2000004fff40 len 64 PASSED
00:07:07.095  malloc 4194304
00:07:07.095  register 0x200000800000 6291456
00:07:07.095  buf 0x200000a00000 len 4194304 PASSED
00:07:07.095  free 0x200000500000 3145728
00:07:07.095  free 0x2000004fff40 64
00:07:07.095  unregister 0x200000400000 4194304 PASSED
00:07:07.095  free 0x200000a00000 4194304
00:07:07.095  unregister 0x200000800000 6291456 PASSED
00:07:07.095  malloc 8388608
00:07:07.095  register 0x200000400000 10485760
00:07:07.095  buf 0x200000600000 len 8388608 PASSED
00:07:07.095  free 0x200000600000 8388608
00:07:07.095  unregister 0x200000400000 10485760 PASSED
00:07:07.095  passed
00:07:07.095  
00:07:07.095  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:07.095                suites      1      1    n/a      0        0
00:07:07.095                 tests      1      1      1      0        0
00:07:07.095               asserts     15     15     15      0      n/a
00:07:07.095  
00:07:07.095  Elapsed time =    0.008 seconds
00:07:07.095  
00:07:07.095  real	0m0.080s
00:07:07.095  user	0m0.017s
00:07:07.095  sys	0m0.062s
00:07:07.095   00:37:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:07.095   00:37:56	-- common/autotest_common.sh@10 -- # set +x
00:07:07.095  ************************************
00:07:07.095  END TEST env_mem_callbacks
00:07:07.095  ************************************
00:07:07.095  
00:07:07.095  real	0m8.864s
00:07:07.095  user	0m6.283s
00:07:07.095  sys	0m1.664s
00:07:07.095   00:37:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:07.095   00:37:56	-- common/autotest_common.sh@10 -- # set +x
00:07:07.095  ************************************
00:07:07.095  END TEST env
00:07:07.095  ************************************
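Every real/user/sys triple in this log comes from run_test, which brackets the target command with the START/END banners and runs it under bash's time builtin; roughly (inferred from the banners, not the literal autotest_common.sh source):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                # emits the real/user/sys lines
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }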
00:07:07.095   00:37:56	-- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh
00:07:07.095   00:37:56	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:07.095   00:37:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:07.095   00:37:56	-- common/autotest_common.sh@10 -- # set +x
00:07:07.095  ************************************
00:07:07.095  START TEST rpc
00:07:07.095  ************************************
00:07:07.095   00:37:56	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh
00:07:07.095  * Looking for test storage...
00:07:07.095  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc
00:07:07.095    00:37:56	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:07.095     00:37:56	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:07.095     00:37:56	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:07.354    00:37:56	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:07.354    00:37:56	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:07.354    00:37:56	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:07.354    00:37:56	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:07.354    00:37:56	-- scripts/common.sh@335 -- # IFS=.-:
00:07:07.354    00:37:56	-- scripts/common.sh@335 -- # read -ra ver1
00:07:07.354    00:37:56	-- scripts/common.sh@336 -- # IFS=.-:
00:07:07.354    00:37:56	-- scripts/common.sh@336 -- # read -ra ver2
00:07:07.354    00:37:56	-- scripts/common.sh@337 -- # local 'op=<'
00:07:07.354    00:37:56	-- scripts/common.sh@339 -- # ver1_l=2
00:07:07.354    00:37:56	-- scripts/common.sh@340 -- # ver2_l=1
00:07:07.354    00:37:56	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:07.354    00:37:56	-- scripts/common.sh@343 -- # case "$op" in
00:07:07.354    00:37:56	-- scripts/common.sh@344 -- # : 1
00:07:07.354    00:37:56	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:07.354    00:37:56	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:07.354     00:37:56	-- scripts/common.sh@364 -- # decimal 1
00:07:07.354     00:37:56	-- scripts/common.sh@352 -- # local d=1
00:07:07.354     00:37:56	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:07.354     00:37:56	-- scripts/common.sh@354 -- # echo 1
00:07:07.354    00:37:56	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:07.354     00:37:56	-- scripts/common.sh@365 -- # decimal 2
00:07:07.354     00:37:56	-- scripts/common.sh@352 -- # local d=2
00:07:07.354     00:37:56	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:07.354     00:37:56	-- scripts/common.sh@354 -- # echo 2
00:07:07.354    00:37:56	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:07.354    00:37:56	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:07.354    00:37:56	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:07.354    00:37:56	-- scripts/common.sh@367 -- # return 0
00:07:07.354    00:37:56	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:07.354    00:37:56	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:07.354  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.354  		--rc genhtml_branch_coverage=1
00:07:07.355  		--rc genhtml_function_coverage=1
00:07:07.355  		--rc genhtml_legend=1
00:07:07.355  		--rc geninfo_all_blocks=1
00:07:07.355  		--rc geninfo_unexecuted_blocks=1
00:07:07.355  		
00:07:07.355  		'
00:07:07.355    00:37:56	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:07.355  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.355  		--rc genhtml_branch_coverage=1
00:07:07.355  		--rc genhtml_function_coverage=1
00:07:07.355  		--rc genhtml_legend=1
00:07:07.355  		--rc geninfo_all_blocks=1
00:07:07.355  		--rc geninfo_unexecuted_blocks=1
00:07:07.355  		
00:07:07.355  		'
00:07:07.355    00:37:56	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:07.355  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.355  		--rc genhtml_branch_coverage=1
00:07:07.355  		--rc genhtml_function_coverage=1
00:07:07.355  		--rc genhtml_legend=1
00:07:07.355  		--rc geninfo_all_blocks=1
00:07:07.355  		--rc geninfo_unexecuted_blocks=1
00:07:07.355  		
00:07:07.355  		'
00:07:07.355    00:37:56	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:07.355  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:07.355  		--rc genhtml_branch_coverage=1
00:07:07.355  		--rc genhtml_function_coverage=1
00:07:07.355  		--rc genhtml_legend=1
00:07:07.355  		--rc geninfo_all_blocks=1
00:07:07.355  		--rc geninfo_unexecuted_blocks=1
00:07:07.355  		
00:07:07.355  		'
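Annotation: the block above is scripts/common.sh deciding whether the installed lcov predates 2.x — cmp_versions splits both version strings on '.', '-' and ':' and compares numerically field by field, exactly the ver1[v]=1 / ver2[v]=2 walk in the trace. A compressed sketch of the same comparison (assumes purely numeric fields, as the traced inputs are):

    cmp_versions_sketch() {   # usage: cmp_versions_sketch 1.15 '<' 2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$3"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a > b )) && { [[ $2 == '>' ]]; return; }   # decided on this field
            (( a < b )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]                                  # all fields equal
    }
    cmp_versions_sketch 1.15 '<' 2 && echo "old lcov: pass branch/function coverage flags"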
00:07:07.355   00:37:56	-- rpc/rpc.sh@65 -- # spdk_pid=940008
00:07:07.355   00:37:56	-- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:07.355   00:37:56	-- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:07:07.355   00:37:56	-- rpc/rpc.sh@67 -- # waitforlisten 940008
00:07:07.355   00:37:56	-- common/autotest_common.sh@829 -- # '[' -z 940008 ']'
00:07:07.355   00:37:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:07.355   00:37:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:07.355   00:37:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:07.355  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:07.355   00:37:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:07.355   00:37:56	-- common/autotest_common.sh@10 -- # set +x
00:07:07.355  [2024-12-17 00:37:56.499145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:07.355  [2024-12-17 00:37:56.499218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940008 ]
00:07:07.355  EAL: No free 2048 kB hugepages reported on node 1
00:07:07.355  [2024-12-17 00:37:56.605552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:07.614  [2024-12-17 00:37:56.651927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:07.614  [2024-12-17 00:37:56.652068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:07:07.614  [2024-12-17 00:37:56.652084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 940008' to capture a snapshot of events at runtime.
00:07:07.614  [2024-12-17 00:37:56.652097] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid940008 for offline analysis/debug.
00:07:07.614  [2024-12-17 00:37:56.652125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.614  [2024-12-17 00:37:56.807195] 'OCF_Core' volume operations registered
00:07:07.614  [2024-12-17 00:37:56.809504] 'OCF_Cache' volume operations registered
00:07:07.614  [2024-12-17 00:37:56.812271] 'OCF Composite' volume operations registered
00:07:07.614  [2024-12-17 00:37:56.814660] 'SPDK_block_device' volume operations registered
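Annotation: the startup notices above are spdk_tgt coming up under 'waitforlisten 940008' — the harness launches the target with the bdev tracepoint group enabled and polls the RPC socket until it answers. A stripped-down sketch assuming the default /var/tmp/spdk.sock; the real waitforlisten in autotest_common.sh adds a retry cap and cleanup on failure:

    spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_pid" || { echo "spdk_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done
    echo "spdk_tgt up as pid $spdk_pid; snapshot traces with: spdk_trace -s spdk_tgt -p $spdk_pid"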
00:07:08.698   00:37:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:08.698   00:37:57	-- common/autotest_common.sh@862 -- # return 0
00:07:08.698   00:37:57	-- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc
00:07:08.698   00:37:57	-- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc
00:07:08.698   00:37:57	-- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:07:08.698   00:37:57	-- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:07:08.698   00:37:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:08.698   00:37:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:08.698   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.698  ************************************
00:07:08.698  START TEST rpc_integrity
00:07:08.699  ************************************
00:07:08.699   00:37:57	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:07:08.699    00:37:57	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:08.699    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:08.699    00:37:57	-- rpc/rpc.sh@13 -- # jq length
00:07:08.699   00:37:57	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:08.699    00:37:57	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:08.699    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@15 -- # malloc=Malloc0
00:07:08.699    00:37:57	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:08.699    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@16 -- # bdevs='[
00:07:08.699  {
00:07:08.699  "name": "Malloc0",
00:07:08.699  "aliases": [
00:07:08.699  "99bb7d13-8735-4101-bf49-cb398c6aa03e"
00:07:08.699  ],
00:07:08.699  "product_name": "Malloc disk",
00:07:08.699  "block_size": 512,
00:07:08.699  "num_blocks": 16384,
00:07:08.699  "uuid": "99bb7d13-8735-4101-bf49-cb398c6aa03e",
00:07:08.699  "assigned_rate_limits": {
00:07:08.699  "rw_ios_per_sec": 0,
00:07:08.699  "rw_mbytes_per_sec": 0,
00:07:08.699  "r_mbytes_per_sec": 0,
00:07:08.699  "w_mbytes_per_sec": 0
00:07:08.699  },
00:07:08.699  "claimed": false,
00:07:08.699  "zoned": false,
00:07:08.699  "supported_io_types": {
00:07:08.699  "read": true,
00:07:08.699  "write": true,
00:07:08.699  "unmap": true,
00:07:08.699  "write_zeroes": true,
00:07:08.699  "flush": true,
00:07:08.699  "reset": true,
00:07:08.699  "compare": false,
00:07:08.699  "compare_and_write": false,
00:07:08.699  "abort": true,
00:07:08.699  "nvme_admin": false,
00:07:08.699  "nvme_io": false
00:07:08.699  },
00:07:08.699  "memory_domains": [
00:07:08.699  {
00:07:08.699  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:08.699  "dma_device_type": 2
00:07:08.699  }
00:07:08.699  ],
00:07:08.699  "driver_specific": {}
00:07:08.699  }
00:07:08.699  ]'
00:07:08.699    00:37:57	-- rpc/rpc.sh@17 -- # jq length
00:07:08.699   00:37:57	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:08.699   00:37:57	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:07:08.699   00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699  [2024-12-17 00:37:57.606903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:07:08.699  [2024-12-17 00:37:57.606944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:08.699  [2024-12-17 00:37:57.606964] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10de2e0
00:07:08.699  [2024-12-17 00:37:57.606976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:08.699  [2024-12-17 00:37:57.608545] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:08.699  [2024-12-17 00:37:57.608575] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:08.699  Passthru0
00:07:08.699   00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699    00:37:57	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:08.699    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@20 -- # bdevs='[
00:07:08.699  {
00:07:08.699  "name": "Malloc0",
00:07:08.699  "aliases": [
00:07:08.699  "99bb7d13-8735-4101-bf49-cb398c6aa03e"
00:07:08.699  ],
00:07:08.699  "product_name": "Malloc disk",
00:07:08.699  "block_size": 512,
00:07:08.699  "num_blocks": 16384,
00:07:08.699  "uuid": "99bb7d13-8735-4101-bf49-cb398c6aa03e",
00:07:08.699  "assigned_rate_limits": {
00:07:08.699  "rw_ios_per_sec": 0,
00:07:08.699  "rw_mbytes_per_sec": 0,
00:07:08.699  "r_mbytes_per_sec": 0,
00:07:08.699  "w_mbytes_per_sec": 0
00:07:08.699  },
00:07:08.699  "claimed": true,
00:07:08.699  "claim_type": "exclusive_write",
00:07:08.699  "zoned": false,
00:07:08.699  "supported_io_types": {
00:07:08.699  "read": true,
00:07:08.699  "write": true,
00:07:08.699  "unmap": true,
00:07:08.699  "write_zeroes": true,
00:07:08.699  "flush": true,
00:07:08.699  "reset": true,
00:07:08.699  "compare": false,
00:07:08.699  "compare_and_write": false,
00:07:08.699  "abort": true,
00:07:08.699  "nvme_admin": false,
00:07:08.699  "nvme_io": false
00:07:08.699  },
00:07:08.699  "memory_domains": [
00:07:08.699  {
00:07:08.699  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:08.699  "dma_device_type": 2
00:07:08.699  }
00:07:08.699  ],
00:07:08.699  "driver_specific": {}
00:07:08.699  },
00:07:08.699  {
00:07:08.699  "name": "Passthru0",
00:07:08.699  "aliases": [
00:07:08.699  "8c599dcc-c316-5821-93cc-c51e71ffd1eb"
00:07:08.699  ],
00:07:08.699  "product_name": "passthru",
00:07:08.699  "block_size": 512,
00:07:08.699  "num_blocks": 16384,
00:07:08.699  "uuid": "8c599dcc-c316-5821-93cc-c51e71ffd1eb",
00:07:08.699  "assigned_rate_limits": {
00:07:08.699  "rw_ios_per_sec": 0,
00:07:08.699  "rw_mbytes_per_sec": 0,
00:07:08.699  "r_mbytes_per_sec": 0,
00:07:08.699  "w_mbytes_per_sec": 0
00:07:08.699  },
00:07:08.699  "claimed": false,
00:07:08.699  "zoned": false,
00:07:08.699  "supported_io_types": {
00:07:08.699  "read": true,
00:07:08.699  "write": true,
00:07:08.699  "unmap": true,
00:07:08.699  "write_zeroes": true,
00:07:08.699  "flush": true,
00:07:08.699  "reset": true,
00:07:08.699  "compare": false,
00:07:08.699  "compare_and_write": false,
00:07:08.699  "abort": true,
00:07:08.699  "nvme_admin": false,
00:07:08.699  "nvme_io": false
00:07:08.699  },
00:07:08.699  "memory_domains": [
00:07:08.699  {
00:07:08.699  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:08.699  "dma_device_type": 2
00:07:08.699  }
00:07:08.699  ],
00:07:08.699  "driver_specific": {
00:07:08.699  "passthru": {
00:07:08.699  "name": "Passthru0",
00:07:08.699  "base_bdev_name": "Malloc0"
00:07:08.699  }
00:07:08.699  }
00:07:08.699  }
00:07:08.699  ]'
00:07:08.699    00:37:57	-- rpc/rpc.sh@21 -- # jq length
00:07:08.699   00:37:57	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:08.699   00:37:57	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:08.699   00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699   00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:07:08.699   00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699   00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699    00:37:57	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:08.699    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:08.699    00:37:57	-- rpc/rpc.sh@26 -- # jq length
00:07:08.699   00:37:57	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:08.699  
00:07:08.699  real	0m0.287s
00:07:08.699  user	0m0.178s
00:07:08.699  sys	0m0.049s
00:07:08.699   00:37:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:08.699   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699  ************************************
00:07:08.699  END TEST rpc_integrity
00:07:08.699  ************************************
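Annotation: rpc_integrity above is a create/wrap/verify/teardown round trip on the bdev layer. The same sequence driven through scripts/rpc.py directly — a sketch; rpc_cmd in the trace is a thin wrapper over the same socket:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    malloc=$($rpc bdev_malloc_create 8 512)          # 8 MB bdev, 512 B blocks; prints e.g. Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]   # base + passthru, as in the JSON dump above
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]   # back to an empty list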
00:07:08.699   00:37:57	-- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:07:08.699   00:37:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:08.699   00:37:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:08.699   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699  ************************************
00:07:08.699  START TEST rpc_plugins
00:07:08.699  ************************************
00:07:08.699   00:37:57	-- common/autotest_common.sh@1114 -- # rpc_plugins
00:07:08.699    00:37:57	-- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:07:08.699    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@30 -- # malloc=Malloc1
00:07:08.699    00:37:57	-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:07:08.699    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.699    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.699    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.699   00:37:57	-- rpc/rpc.sh@31 -- # bdevs='[
00:07:08.699  {
00:07:08.699  "name": "Malloc1",
00:07:08.699  "aliases": [
00:07:08.699  "61cfd32b-6777-408a-b82b-5ef5ddeb63ba"
00:07:08.699  ],
00:07:08.699  "product_name": "Malloc disk",
00:07:08.699  "block_size": 4096,
00:07:08.699  "num_blocks": 256,
00:07:08.699  "uuid": "61cfd32b-6777-408a-b82b-5ef5ddeb63ba",
00:07:08.699  "assigned_rate_limits": {
00:07:08.699  "rw_ios_per_sec": 0,
00:07:08.699  "rw_mbytes_per_sec": 0,
00:07:08.699  "r_mbytes_per_sec": 0,
00:07:08.699  "w_mbytes_per_sec": 0
00:07:08.699  },
00:07:08.699  "claimed": false,
00:07:08.699  "zoned": false,
00:07:08.699  "supported_io_types": {
00:07:08.699  "read": true,
00:07:08.699  "write": true,
00:07:08.699  "unmap": true,
00:07:08.699  "write_zeroes": true,
00:07:08.699  "flush": true,
00:07:08.699  "reset": true,
00:07:08.699  "compare": false,
00:07:08.699  "compare_and_write": false,
00:07:08.699  "abort": true,
00:07:08.699  "nvme_admin": false,
00:07:08.699  "nvme_io": false
00:07:08.699  },
00:07:08.699  "memory_domains": [
00:07:08.699  {
00:07:08.700  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:08.700  "dma_device_type": 2
00:07:08.700  }
00:07:08.700  ],
00:07:08.700  "driver_specific": {}
00:07:08.700  }
00:07:08.700  ]'
00:07:08.700    00:37:57	-- rpc/rpc.sh@32 -- # jq length
00:07:08.700   00:37:57	-- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:07:08.700   00:37:57	-- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:07:08.700   00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.700   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.700   00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.700    00:37:57	-- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:07:08.700    00:37:57	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.700    00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.700    00:37:57	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.700   00:37:57	-- rpc/rpc.sh@35 -- # bdevs='[]'
00:07:08.700    00:37:57	-- rpc/rpc.sh@36 -- # jq length
00:07:08.700   00:37:57	-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:07:08.700  
00:07:08.700  real	0m0.145s
00:07:08.700  user	0m0.093s
00:07:08.700  sys	0m0.023s
00:07:08.700   00:37:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:08.700   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.700  ************************************
00:07:08.700  END TEST rpc_plugins
00:07:08.700  ************************************
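Annotation: the --plugin calls above work because rpc.py imports the named Python module from PYTHONPATH (extended two lines into the rpc suite) and lets it register extra subcommands; create_malloc and delete_malloc come from the in-tree test plugin under spdk/test/rpc_plugins. A sketch of the same round trip:

    export PYTHONPATH=$PYTHONPATH:spdk/test/rpc_plugins
    malloc=$(scripts/rpc.py --plugin rpc_plugin create_malloc)     # 256 blocks x 4096 B, per the dump above
    scripts/rpc.py bdev_get_bdevs \
        | jq -e --arg n "$malloc" 'map(.name) | index($n) != null'
    scripts/rpc.py --plugin rpc_plugin delete_malloc "$malloc"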
00:07:08.959   00:37:57	-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:07:08.959   00:37:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:08.959   00:37:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:08.959   00:37:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.959  ************************************
00:07:08.959  START TEST rpc_trace_cmd_test
00:07:08.959  ************************************
00:07:08.959   00:37:58	-- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test
00:07:08.959   00:37:58	-- rpc/rpc.sh@40 -- # local info
00:07:08.959    00:37:58	-- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:07:08.959    00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:08.959    00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:08.959    00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:08.959   00:37:58	-- rpc/rpc.sh@42 -- # info='{
00:07:08.959  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid940008",
00:07:08.959  "tpoint_group_mask": "0x8",
00:07:08.959  "iscsi_conn": {
00:07:08.959  "mask": "0x2",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "scsi": {
00:07:08.959  "mask": "0x4",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "bdev": {
00:07:08.959  "mask": "0x8",
00:07:08.959  "tpoint_mask": "0xffffffffffffffff"
00:07:08.959  },
00:07:08.959  "nvmf_rdma": {
00:07:08.959  "mask": "0x10",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "nvmf_tcp": {
00:07:08.959  "mask": "0x20",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "ftl": {
00:07:08.959  "mask": "0x40",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "blobfs": {
00:07:08.959  "mask": "0x80",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "dsa": {
00:07:08.959  "mask": "0x200",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "thread": {
00:07:08.959  "mask": "0x400",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "nvme_pcie": {
00:07:08.959  "mask": "0x800",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "iaa": {
00:07:08.959  "mask": "0x1000",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "nvme_tcp": {
00:07:08.959  "mask": "0x2000",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  },
00:07:08.959  "bdev_nvme": {
00:07:08.959  "mask": "0x4000",
00:07:08.959  "tpoint_mask": "0x0"
00:07:08.959  }
00:07:08.959  }'
00:07:08.959    00:37:58	-- rpc/rpc.sh@43 -- # jq length
00:07:08.959   00:37:58	-- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']'
00:07:08.959    00:37:58	-- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:07:08.959   00:37:58	-- rpc/rpc.sh@44 -- # '[' true = true ']'
00:07:08.959    00:37:58	-- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:07:08.959   00:37:58	-- rpc/rpc.sh@45 -- # '[' true = true ']'
00:07:08.959    00:37:58	-- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:07:08.959   00:37:58	-- rpc/rpc.sh@46 -- # '[' true = true ']'
00:07:08.959    00:37:58	-- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:07:09.219   00:37:58	-- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:07:09.219  
00:07:09.219  real	0m0.245s
00:07:09.219  user	0m0.195s
00:07:09.219  sys	0m0.042s
00:07:09.219   00:37:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:09.219   00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.219  ************************************
00:07:09.219  END TEST rpc_trace_cmd_test
00:07:09.219  ************************************
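Annotation: spelled out, the jq probes above assert that '-e bdev' left exactly the bdev group (mask 0x8) unmasked and that the shm file for offline trace decoding was published. The same checks against the trace_get_info payload:

    info=$(scripts/rpc.py trace_get_info)
    jq -e '.tpoint_group_mask == "0x8"' <<< "$info"     # only the bdev group enabled
    jq -e '.bdev.tpoint_mask != "0x0"'  <<< "$info"     # all bdev tpoints on (0xff..ff)
    shm=$(jq -r '.tpoint_shm_path' <<< "$info")
    test -e "$shm"                                      # /dev/shm/spdk_tgt_trace.pid<pid>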
00:07:09.219   00:37:58	-- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:07:09.219   00:37:58	-- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:07:09.219   00:37:58	-- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:07:09.219   00:37:58	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:09.219   00:37:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:09.219   00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.219  ************************************
00:07:09.219  START TEST rpc_daemon_integrity
00:07:09.219  ************************************
00:07:09.219   00:37:58	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:07:09.219    00:37:58	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:09.219    00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.219    00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.219    00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.219   00:37:58	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:09.219    00:37:58	-- rpc/rpc.sh@13 -- # jq length
00:07:09.219   00:37:58	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:09.219    00:37:58	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:09.219    00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.219    00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.219    00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.219   00:37:58	-- rpc/rpc.sh@15 -- # malloc=Malloc2
00:07:09.219    00:37:58	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:09.219    00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.219    00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.219    00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.219   00:37:58	-- rpc/rpc.sh@16 -- # bdevs='[
00:07:09.220  {
00:07:09.220  "name": "Malloc2",
00:07:09.220  "aliases": [
00:07:09.220  "4917e82f-c2ad-4bf9-98f6-b47a3167740b"
00:07:09.220  ],
00:07:09.220  "product_name": "Malloc disk",
00:07:09.220  "block_size": 512,
00:07:09.220  "num_blocks": 16384,
00:07:09.220  "uuid": "4917e82f-c2ad-4bf9-98f6-b47a3167740b",
00:07:09.220  "assigned_rate_limits": {
00:07:09.220  "rw_ios_per_sec": 0,
00:07:09.220  "rw_mbytes_per_sec": 0,
00:07:09.220  "r_mbytes_per_sec": 0,
00:07:09.220  "w_mbytes_per_sec": 0
00:07:09.220  },
00:07:09.220  "claimed": false,
00:07:09.220  "zoned": false,
00:07:09.220  "supported_io_types": {
00:07:09.220  "read": true,
00:07:09.220  "write": true,
00:07:09.220  "unmap": true,
00:07:09.220  "write_zeroes": true,
00:07:09.220  "flush": true,
00:07:09.220  "reset": true,
00:07:09.220  "compare": false,
00:07:09.220  "compare_and_write": false,
00:07:09.220  "abort": true,
00:07:09.220  "nvme_admin": false,
00:07:09.220  "nvme_io": false
00:07:09.220  },
00:07:09.220  "memory_domains": [
00:07:09.220  {
00:07:09.220  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:09.220  "dma_device_type": 2
00:07:09.220  }
00:07:09.220  ],
00:07:09.220  "driver_specific": {}
00:07:09.220  }
00:07:09.220  ]'
00:07:09.220    00:37:58	-- rpc/rpc.sh@17 -- # jq length
00:07:09.220   00:37:58	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:09.220   00:37:58	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:07:09.220   00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.220   00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.220  [2024-12-17 00:37:58.429242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:07:09.220  [2024-12-17 00:37:58.429281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:09.220  [2024-12-17 00:37:58.429302] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cf720
00:07:09.220  [2024-12-17 00:37:58.429315] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:09.220  [2024-12-17 00:37:58.430658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:09.220  [2024-12-17 00:37:58.430686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:09.220  Passthru0
00:07:09.220   00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.220    00:37:58	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:09.220    00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.220    00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.220    00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.220   00:37:58	-- rpc/rpc.sh@20 -- # bdevs='[
00:07:09.220  {
00:07:09.220  "name": "Malloc2",
00:07:09.220  "aliases": [
00:07:09.220  "4917e82f-c2ad-4bf9-98f6-b47a3167740b"
00:07:09.220  ],
00:07:09.220  "product_name": "Malloc disk",
00:07:09.220  "block_size": 512,
00:07:09.220  "num_blocks": 16384,
00:07:09.220  "uuid": "4917e82f-c2ad-4bf9-98f6-b47a3167740b",
00:07:09.220  "assigned_rate_limits": {
00:07:09.220  "rw_ios_per_sec": 0,
00:07:09.220  "rw_mbytes_per_sec": 0,
00:07:09.220  "r_mbytes_per_sec": 0,
00:07:09.220  "w_mbytes_per_sec": 0
00:07:09.220  },
00:07:09.220  "claimed": true,
00:07:09.220  "claim_type": "exclusive_write",
00:07:09.220  "zoned": false,
00:07:09.220  "supported_io_types": {
00:07:09.220  "read": true,
00:07:09.220  "write": true,
00:07:09.220  "unmap": true,
00:07:09.220  "write_zeroes": true,
00:07:09.220  "flush": true,
00:07:09.220  "reset": true,
00:07:09.220  "compare": false,
00:07:09.220  "compare_and_write": false,
00:07:09.220  "abort": true,
00:07:09.220  "nvme_admin": false,
00:07:09.220  "nvme_io": false
00:07:09.220  },
00:07:09.220  "memory_domains": [
00:07:09.220  {
00:07:09.220  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:09.220  "dma_device_type": 2
00:07:09.220  }
00:07:09.220  ],
00:07:09.220  "driver_specific": {}
00:07:09.220  },
00:07:09.220  {
00:07:09.220  "name": "Passthru0",
00:07:09.220  "aliases": [
00:07:09.220  "1f229a26-34bc-5036-aa93-370bcddddfbf"
00:07:09.220  ],
00:07:09.220  "product_name": "passthru",
00:07:09.220  "block_size": 512,
00:07:09.220  "num_blocks": 16384,
00:07:09.220  "uuid": "1f229a26-34bc-5036-aa93-370bcddddfbf",
00:07:09.220  "assigned_rate_limits": {
00:07:09.220  "rw_ios_per_sec": 0,
00:07:09.220  "rw_mbytes_per_sec": 0,
00:07:09.220  "r_mbytes_per_sec": 0,
00:07:09.220  "w_mbytes_per_sec": 0
00:07:09.220  },
00:07:09.220  "claimed": false,
00:07:09.220  "zoned": false,
00:07:09.220  "supported_io_types": {
00:07:09.220  "read": true,
00:07:09.220  "write": true,
00:07:09.220  "unmap": true,
00:07:09.220  "write_zeroes": true,
00:07:09.220  "flush": true,
00:07:09.220  "reset": true,
00:07:09.220  "compare": false,
00:07:09.220  "compare_and_write": false,
00:07:09.220  "abort": true,
00:07:09.220  "nvme_admin": false,
00:07:09.220  "nvme_io": false
00:07:09.220  },
00:07:09.220  "memory_domains": [
00:07:09.220  {
00:07:09.220  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:09.220  "dma_device_type": 2
00:07:09.220  }
00:07:09.220  ],
00:07:09.220  "driver_specific": {
00:07:09.220  "passthru": {
00:07:09.220  "name": "Passthru0",
00:07:09.220  "base_bdev_name": "Malloc2"
00:07:09.220  }
00:07:09.220  }
00:07:09.220  }
00:07:09.220  ]'
00:07:09.220    00:37:58	-- rpc/rpc.sh@21 -- # jq length
00:07:09.480   00:37:58	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:09.480   00:37:58	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:09.480   00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.480   00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.480   00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.480   00:37:58	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:07:09.480   00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.480   00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.480   00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.480    00:37:58	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:09.480    00:37:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:09.480    00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.480    00:37:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:09.480   00:37:58	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:09.480    00:37:58	-- rpc/rpc.sh@26 -- # jq length
00:07:09.480   00:37:58	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:09.480  
00:07:09.480  real	0m0.292s
00:07:09.480  user	0m0.192s
00:07:09.480  sys	0m0.044s
00:07:09.480   00:37:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:09.480   00:37:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.480  ************************************
00:07:09.480  END TEST rpc_daemon_integrity
00:07:09.480  ************************************
00:07:09.480   00:37:58	-- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:07:09.480   00:37:58	-- rpc/rpc.sh@84 -- # killprocess 940008
00:07:09.480   00:37:58	-- common/autotest_common.sh@936 -- # '[' -z 940008 ']'
00:07:09.480   00:37:58	-- common/autotest_common.sh@940 -- # kill -0 940008
00:07:09.480    00:37:58	-- common/autotest_common.sh@941 -- # uname
00:07:09.480   00:37:58	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:09.480    00:37:58	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 940008
00:07:09.480   00:37:58	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:09.480   00:37:58	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:09.480   00:37:58	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 940008'
00:07:09.480  killing process with pid 940008
00:07:09.480   00:37:58	-- common/autotest_common.sh@955 -- # kill 940008
00:07:09.480   00:37:58	-- common/autotest_common.sh@960 -- # wait 940008
00:07:10.049  
00:07:10.049  real	0m2.963s
00:07:10.049  user	0m3.629s
00:07:10.049  sys	0m0.933s
00:07:10.049   00:37:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:10.049   00:37:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.049  ************************************
00:07:10.049  END TEST rpc
00:07:10.049  ************************************
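Annotation: the teardown traced above is the killprocess helper — confirm the pid is still alive and really is our reactor process (not a sudo wrapper), then kill and reap it. Reduced to its skeleton; a sketch, noting that wait can only reap pids the shell itself spawned:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                  # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                         # reaps only our own children
    }
    killprocess_sketch 940008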
00:07:10.049   00:37:59	-- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:10.049   00:37:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:10.049   00:37:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:10.049   00:37:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.049  ************************************
00:07:10.049  START TEST rpc_client
00:07:10.049  ************************************
00:07:10.049   00:37:59	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:10.309  * Looking for test storage...
00:07:10.309  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client
00:07:10.309    00:37:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:10.309     00:37:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:10.309     00:37:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:10.309    00:37:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:10.309    00:37:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:10.309    00:37:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:10.309    00:37:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:10.309    00:37:59	-- scripts/common.sh@335 -- # IFS=.-:
00:07:10.309    00:37:59	-- scripts/common.sh@335 -- # read -ra ver1
00:07:10.309    00:37:59	-- scripts/common.sh@336 -- # IFS=.-:
00:07:10.309    00:37:59	-- scripts/common.sh@336 -- # read -ra ver2
00:07:10.309    00:37:59	-- scripts/common.sh@337 -- # local 'op=<'
00:07:10.309    00:37:59	-- scripts/common.sh@339 -- # ver1_l=2
00:07:10.309    00:37:59	-- scripts/common.sh@340 -- # ver2_l=1
00:07:10.309    00:37:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:10.309    00:37:59	-- scripts/common.sh@343 -- # case "$op" in
00:07:10.309    00:37:59	-- scripts/common.sh@344 -- # : 1
00:07:10.309    00:37:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:10.309    00:37:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:10.309     00:37:59	-- scripts/common.sh@364 -- # decimal 1
00:07:10.309     00:37:59	-- scripts/common.sh@352 -- # local d=1
00:07:10.309     00:37:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:10.309     00:37:59	-- scripts/common.sh@354 -- # echo 1
00:07:10.309    00:37:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:10.309     00:37:59	-- scripts/common.sh@365 -- # decimal 2
00:07:10.309     00:37:59	-- scripts/common.sh@352 -- # local d=2
00:07:10.309     00:37:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:10.309     00:37:59	-- scripts/common.sh@354 -- # echo 2
00:07:10.309    00:37:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:10.309    00:37:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:10.309    00:37:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:10.309    00:37:59	-- scripts/common.sh@367 -- # return 0
00:07:10.309    00:37:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:10.309    00:37:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:10.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.309  		--rc genhtml_branch_coverage=1
00:07:10.309  		--rc genhtml_function_coverage=1
00:07:10.309  		--rc genhtml_legend=1
00:07:10.309  		--rc geninfo_all_blocks=1
00:07:10.309  		--rc geninfo_unexecuted_blocks=1
00:07:10.309  		
00:07:10.309  		'
00:07:10.309    00:37:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:10.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.309  		--rc genhtml_branch_coverage=1
00:07:10.309  		--rc genhtml_function_coverage=1
00:07:10.309  		--rc genhtml_legend=1
00:07:10.309  		--rc geninfo_all_blocks=1
00:07:10.309  		--rc geninfo_unexecuted_blocks=1
00:07:10.309  		
00:07:10.309  		'
00:07:10.309    00:37:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:10.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.309  		--rc genhtml_branch_coverage=1
00:07:10.309  		--rc genhtml_function_coverage=1
00:07:10.309  		--rc genhtml_legend=1
00:07:10.309  		--rc geninfo_all_blocks=1
00:07:10.309  		--rc geninfo_unexecuted_blocks=1
00:07:10.309  		
00:07:10.309  		'
00:07:10.309    00:37:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:10.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.309  		--rc genhtml_branch_coverage=1
00:07:10.309  		--rc genhtml_function_coverage=1
00:07:10.309  		--rc genhtml_legend=1
00:07:10.309  		--rc geninfo_all_blocks=1
00:07:10.309  		--rc geninfo_unexecuted_blocks=1
00:07:10.309  		
00:07:10.309  		'
00:07:10.309   00:37:59	-- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:07:10.309  OK
00:07:10.309   00:37:59	-- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:10.309  
00:07:10.309  real	0m0.224s
00:07:10.309  user	0m0.130s
00:07:10.309  sys	0m0.111s
00:07:10.309   00:37:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:10.309   00:37:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.309  ************************************
00:07:10.309  END TEST rpc_client
00:07:10.309  ************************************
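Annotation: rpc_client_test (the 'OK' above) exercises SPDK's C JSON-RPC client library against the target. The wire protocol is plain JSON-RPC 2.0 over a unix socket, so a raw request works without rpc.py at all — a sketch, assuming a listening target and an nc build with -U/-N (OpenBSD netcat):

    printf '{"jsonrpc":"2.0","method":"rpc_get_methods","id":1}' \
        | nc -N -U /var/tmp/spdk.sock | jq -r '.result[]' | head -5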
00:07:10.309   00:37:59	-- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh
00:07:10.309   00:37:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:10.309   00:37:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:10.309   00:37:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.309  ************************************
00:07:10.309  START TEST json_config
00:07:10.309  ************************************
00:07:10.309   00:37:59	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh
00:07:10.569    00:37:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:10.569     00:37:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:10.569     00:37:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:10.569    00:37:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:10.569    00:37:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:10.569    00:37:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:10.569    00:37:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:10.569    00:37:59	-- scripts/common.sh@335 -- # IFS=.-:
00:07:10.569    00:37:59	-- scripts/common.sh@335 -- # read -ra ver1
00:07:10.569    00:37:59	-- scripts/common.sh@336 -- # IFS=.-:
00:07:10.569    00:37:59	-- scripts/common.sh@336 -- # read -ra ver2
00:07:10.569    00:37:59	-- scripts/common.sh@337 -- # local 'op=<'
00:07:10.569    00:37:59	-- scripts/common.sh@339 -- # ver1_l=2
00:07:10.569    00:37:59	-- scripts/common.sh@340 -- # ver2_l=1
00:07:10.569    00:37:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:10.569    00:37:59	-- scripts/common.sh@343 -- # case "$op" in
00:07:10.569    00:37:59	-- scripts/common.sh@344 -- # : 1
00:07:10.569    00:37:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:10.569    00:37:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:10.569     00:37:59	-- scripts/common.sh@364 -- # decimal 1
00:07:10.569     00:37:59	-- scripts/common.sh@352 -- # local d=1
00:07:10.569     00:37:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:10.569     00:37:59	-- scripts/common.sh@354 -- # echo 1
00:07:10.569    00:37:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:10.569     00:37:59	-- scripts/common.sh@365 -- # decimal 2
00:07:10.569     00:37:59	-- scripts/common.sh@352 -- # local d=2
00:07:10.569     00:37:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:10.569     00:37:59	-- scripts/common.sh@354 -- # echo 2
00:07:10.569    00:37:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:10.569    00:37:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:10.569    00:37:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:10.569    00:37:59	-- scripts/common.sh@367 -- # return 0
00:07:10.569    00:37:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:10.569    00:37:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:10.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.569  		--rc genhtml_branch_coverage=1
00:07:10.569  		--rc genhtml_function_coverage=1
00:07:10.569  		--rc genhtml_legend=1
00:07:10.569  		--rc geninfo_all_blocks=1
00:07:10.569  		--rc geninfo_unexecuted_blocks=1
00:07:10.569  		
00:07:10.569  		'
00:07:10.569    00:37:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:10.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.569  		--rc genhtml_branch_coverage=1
00:07:10.569  		--rc genhtml_function_coverage=1
00:07:10.569  		--rc genhtml_legend=1
00:07:10.569  		--rc geninfo_all_blocks=1
00:07:10.569  		--rc geninfo_unexecuted_blocks=1
00:07:10.569  		
00:07:10.569  		'
00:07:10.569    00:37:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:10.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.569  		--rc genhtml_branch_coverage=1
00:07:10.569  		--rc genhtml_function_coverage=1
00:07:10.569  		--rc genhtml_legend=1
00:07:10.569  		--rc geninfo_all_blocks=1
00:07:10.569  		--rc geninfo_unexecuted_blocks=1
00:07:10.569  		
00:07:10.569  		'
00:07:10.569    00:37:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:10.569  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.569  		--rc genhtml_branch_coverage=1
00:07:10.569  		--rc genhtml_function_coverage=1
00:07:10.569  		--rc genhtml_legend=1
00:07:10.569  		--rc geninfo_all_blocks=1
00:07:10.569  		--rc geninfo_unexecuted_blocks=1
00:07:10.569  		
00:07:10.569  		'
00:07:10.569   00:37:59	-- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh
00:07:10.569     00:37:59	-- nvmf/common.sh@7 -- # uname -s
00:07:10.569    00:37:59	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:10.569    00:37:59	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:10.569    00:37:59	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:10.569    00:37:59	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:10.569    00:37:59	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:10.569    00:37:59	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:10.569    00:37:59	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:10.569    00:37:59	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:10.569    00:37:59	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:10.569     00:37:59	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:10.569    00:37:59	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e
00:07:10.569    00:37:59	-- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e
00:07:10.569    00:37:59	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:10.569    00:37:59	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:10.569    00:37:59	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:10.569    00:37:59	-- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:07:10.569     00:37:59	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:10.569     00:37:59	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:10.569     00:37:59	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:10.569      00:37:59	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:10.569      00:37:59	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:10.569      00:37:59	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:10.569      00:37:59	-- paths/export.sh@5 -- # export PATH
00:07:10.569      00:37:59	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:10.569    00:37:59	-- nvmf/common.sh@46 -- # : 0
00:07:10.569    00:37:59	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:10.569    00:37:59	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:10.569    00:37:59	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:10.569    00:37:59	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:10.569    00:37:59	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:10.569    00:37:59	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:10.569    00:37:59	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:10.569    00:37:59	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:10.569   00:37:59	-- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:07:10.569   00:37:59	-- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:07:10.569   00:37:59	-- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:07:10.569   00:37:59	-- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:07:10.569   00:37:59	-- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:07:10.569  WARNING: No tests are enabled so not running JSON configuration tests
00:07:10.569   00:37:59	-- json_config/json_config.sh@27 -- # exit 0
00:07:10.569  
00:07:10.569  real	0m0.207s
00:07:10.569  user	0m0.137s
00:07:10.569  sys	0m0.079s
00:07:10.569   00:37:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:10.569   00:37:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.569  ************************************
00:07:10.569  END TEST json_config
00:07:10.569  ************************************
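Annotation: the WARNING and 'exit 0' above are json_config's enable gate — with none of the relevant SPDK_TEST_* flags set in this nvme-phy run, the suite declares success without configuring anything. The guard, as traced at json_config.sh@25-27:

    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF \
          + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi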
00:07:10.569   00:37:59	-- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:10.569   00:37:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:10.569   00:37:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:10.569   00:37:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.569  ************************************
00:07:10.570  START TEST json_config_extra_key
00:07:10.570  ************************************
00:07:10.570   00:37:59	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:10.829    00:37:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:10.829     00:37:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:10.829     00:37:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:10.829    00:37:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:10.829    00:37:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:10.829    00:37:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:10.829    00:37:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:10.829    00:37:59	-- scripts/common.sh@335 -- # IFS=.-:
00:07:10.829    00:37:59	-- scripts/common.sh@335 -- # read -ra ver1
00:07:10.829    00:37:59	-- scripts/common.sh@336 -- # IFS=.-:
00:07:10.830    00:37:59	-- scripts/common.sh@336 -- # read -ra ver2
00:07:10.830    00:37:59	-- scripts/common.sh@337 -- # local 'op=<'
00:07:10.830    00:37:59	-- scripts/common.sh@339 -- # ver1_l=2
00:07:10.830    00:37:59	-- scripts/common.sh@340 -- # ver2_l=1
00:07:10.830    00:37:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:10.830    00:37:59	-- scripts/common.sh@343 -- # case "$op" in
00:07:10.830    00:37:59	-- scripts/common.sh@344 -- # : 1
00:07:10.830    00:37:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:10.830    00:37:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:10.830     00:37:59	-- scripts/common.sh@364 -- # decimal 1
00:07:10.830     00:37:59	-- scripts/common.sh@352 -- # local d=1
00:07:10.830     00:37:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:10.830     00:37:59	-- scripts/common.sh@354 -- # echo 1
00:07:10.830    00:37:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:10.830     00:37:59	-- scripts/common.sh@365 -- # decimal 2
00:07:10.830     00:37:59	-- scripts/common.sh@352 -- # local d=2
00:07:10.830     00:37:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:10.830     00:37:59	-- scripts/common.sh@354 -- # echo 2
00:07:10.830    00:37:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:10.830    00:37:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:10.830    00:37:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:10.830    00:37:59	-- scripts/common.sh@367 -- # return 0
00:07:10.830    00:37:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:10.830    00:37:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:10.830  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.830  		--rc genhtml_branch_coverage=1
00:07:10.830  		--rc genhtml_function_coverage=1
00:07:10.830  		--rc genhtml_legend=1
00:07:10.830  		--rc geninfo_all_blocks=1
00:07:10.830  		--rc geninfo_unexecuted_blocks=1
00:07:10.830  		
00:07:10.830  		'
00:07:10.830    00:37:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:10.830  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.830  		--rc genhtml_branch_coverage=1
00:07:10.830  		--rc genhtml_function_coverage=1
00:07:10.830  		--rc genhtml_legend=1
00:07:10.830  		--rc geninfo_all_blocks=1
00:07:10.830  		--rc geninfo_unexecuted_blocks=1
00:07:10.830  		
00:07:10.830  		'
00:07:10.830    00:37:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:10.830  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.830  		--rc genhtml_branch_coverage=1
00:07:10.830  		--rc genhtml_function_coverage=1
00:07:10.830  		--rc genhtml_legend=1
00:07:10.830  		--rc geninfo_all_blocks=1
00:07:10.830  		--rc geninfo_unexecuted_blocks=1
00:07:10.830  		
00:07:10.830  		'
00:07:10.830    00:37:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:10.830  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:10.830  		--rc genhtml_branch_coverage=1
00:07:10.830  		--rc genhtml_function_coverage=1
00:07:10.830  		--rc genhtml_legend=1
00:07:10.830  		--rc geninfo_all_blocks=1
00:07:10.830  		--rc geninfo_unexecuted_blocks=1
00:07:10.830  		
00:07:10.830  		'
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh
00:07:10.830     00:37:59	-- nvmf/common.sh@7 -- # uname -s
00:07:10.830    00:37:59	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:10.830    00:37:59	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:10.830    00:37:59	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:10.830    00:37:59	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:10.830    00:37:59	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:10.830    00:37:59	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:10.830    00:37:59	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:10.830    00:37:59	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:10.830    00:37:59	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:10.830     00:37:59	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:10.830    00:37:59	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e
00:07:10.830    00:37:59	-- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e
00:07:10.830    00:37:59	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:10.830    00:37:59	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:10.830    00:37:59	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:10.830    00:37:59	-- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:07:10.830     00:37:59	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:10.830     00:37:59	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:10.830     00:37:59	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:10.830      00:37:59	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:10.830      00:37:59	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:10.830      00:37:59	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:10.830      00:37:59	-- paths/export.sh@5 -- # export PATH
00:07:10.830      00:37:59	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
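The three PATH assignments above prepend the go/golangci/protoc prefixes again on every source of export.sh, so each toolchain directory ends up in PATH three or four times. Lookup still works, since the first match wins, but if the growth ever mattered a dedup pass is a short loop; a sketch, assuming bash 4+ for the associative array:

    #!/usr/bin/env bash
    # Drop repeated PATH entries, preserving first-seen order.
    dedupe_path() {
        local entry out=
        declare -A seen
        while IFS= read -rd: entry; do
            [[ -n $entry && -z ${seen[$entry]:-} ]] && { seen[$entry]=1; out+=${out:+:}$entry; }
        done <<< "$PATH:"   # trailing ':' so the last entry is consumed
        printf '%s\n' "$out"
    }
    PATH=$(dedupe_path)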
00:07:10.830    00:37:59	-- nvmf/common.sh@46 -- # : 0
00:07:10.830    00:37:59	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:10.830    00:37:59	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:10.830    00:37:59	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:10.830    00:37:59	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:10.830    00:37:59	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:10.830    00:37:59	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:10.830    00:37:59	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:10.830    00:37:59	-- nvmf/common.sh@50 -- # have_pci_nics=0
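build_nvmf_app_args, traced above, grows the NVMF_APP argv conditionally: the -i $NVMF_APP_SHM_ID -e 0xFFFF pair is unconditional, the NO_HUGE array splices to nothing because it is empty in this run, and the two guarded branches (@24 and @32/@34) are skipped. The array idiom, reduced to a sketch:

    #!/usr/bin/env bash
    # Build argv in an array so multi-word options survive intact.
    NVMF_APP=(./build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NO_HUGE=()                                    # empty here, as in this run

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # always appended
    NVMF_APP+=("${NO_HUGE[@]}")                   # expands to zero words when empty

    printf '%q ' "${NVMF_APP[@]}"; echo           # inspect the final command line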
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='')
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@18 -- # declare -A app_params
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json')
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path
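Each of the four pairs above looks like an assignment followed by a bare declare -A, but that is most likely how bash xtrace renders a single declare -A name=(...) statement; in the script each table is one line. The pattern, restated as a sketch:

    #!/usr/bin/env bash
    # One associative array per per-app attribute, keyed by app name.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')

    app=target
    echo "RPC socket for $app: ${app_socket[$app]}"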
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...'
00:07:10.830  INFO: launching applications...
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@24 -- # local app=target
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@25 -- # shift
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]]
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]]
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=940640
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...'
00:07:10.830  Waiting for target to run...
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@34 -- # waitforlisten 940640 /var/tmp/spdk_tgt.sock
00:07:10.830   00:37:59	-- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json
00:07:10.830   00:37:59	-- common/autotest_common.sh@829 -- # '[' -z 940640 ']'
00:07:10.830   00:37:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:10.830   00:37:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:10.830   00:37:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:10.830  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:10.830   00:37:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:10.830   00:37:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.830  [2024-12-17 00:38:00.046305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:10.830  [2024-12-17 00:38:00.046392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940640 ]
00:07:11.089  EAL: No free 2048 kB hugepages reported on node 1
00:07:11.348  [2024-12-17 00:38:00.572536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.607  [2024-12-17 00:38:00.611531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:11.607  [2024-12-17 00:38:00.611676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.607  [2024-12-17 00:38:00.651591] 'OCF_Core' volume operations registered
00:07:11.607  [2024-12-17 00:38:00.652751] 'OCF_Cache' volume operations registered
00:07:11.607  [2024-12-17 00:38:00.653949] 'OCF Composite' volume operations registered
00:07:11.607  [2024-12-17 00:38:00.655085] 'SPDK_block_device' volume operations registered
00:07:11.866   00:38:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:11.866   00:38:00	-- common/autotest_common.sh@862 -- # return 0
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@35 -- # echo ''
00:07:11.866  
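The (( i == 0 )) / return 0 pair above is waitforlisten succeeding on its first probe: the helper polls until the target answers on /var/tmp/spdk_tgt.sock. A minimal sketch of that readiness loop, assuming rpc_get_methods as the probe command, which is not itself visible in this trace:

    #!/usr/bin/env bash
    # Poll an SPDK RPC socket until the target answers or the budget runs out.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1       # target died while starting
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0                                 # socket is up and answering
            fi
            sleep 0.1
        done
        return 1
    }

The 0.1 s interval and the probe RPC are illustrative; only the max_retries=100 budget appears in the trace above.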
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:07:11.866  INFO: shutting down applications...
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@40 -- # local app=target
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@44 -- # [[ -n 940640 ]]
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 940640
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@50 -- # kill -0 940640
00:07:11.866   00:38:00	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:07:12.434   00:38:01	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:07:12.434   00:38:01	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:12.434   00:38:01	-- json_config/json_config_extra_key.sh@50 -- # kill -0 940640
00:07:12.434   00:38:01	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@50 -- # kill -0 940640
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@52 -- # break
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:07:13.003  SPDK target shutdown done
00:07:13.003   00:38:01	-- json_config/json_config_extra_key.sh@82 -- # echo Success
00:07:13.003  Success
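The shutdown sequence above is json_config_test_shutdown_app: one SIGINT to pid 940640, then a kill -0 liveness poll every 0.5 s with a budget of 30 iterations; on the third check the process is gone, the pid entry is cleared, and the loop breaks. Condensed into a standalone sketch, where the kill -9 escalation on timeout is an assumption this run never exercises:

    #!/usr/bin/env bash
    # Graceful shutdown: signal once, poll liveness, escalate only on timeout.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0   # exited cleanly
            sleep 0.5
        done
        kill -9 "$pid" 2>/dev/null                   # timeout branch, unused here
    }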
00:07:13.003  
00:07:13.003  real	0m2.178s
00:07:13.003  user	0m1.382s
00:07:13.003  sys	0m0.727s
00:07:13.003   00:38:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:13.003   00:38:01	-- common/autotest_common.sh@10 -- # set +x
00:07:13.003  ************************************
00:07:13.003  END TEST json_config_extra_key
00:07:13.003  ************************************
00:07:13.003   00:38:02	-- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:13.003   00:38:02	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:13.003   00:38:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:13.003   00:38:02	-- common/autotest_common.sh@10 -- # set +x
00:07:13.003  ************************************
00:07:13.003  START TEST alias_rpc
00:07:13.003  ************************************
00:07:13.003   00:38:02	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:13.003  * Looking for test storage...
00:07:13.003  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc
00:07:13.003    00:38:02	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:13.003     00:38:02	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:13.003     00:38:02	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:13.003    00:38:02	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:13.003    00:38:02	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:13.003    00:38:02	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:13.003    00:38:02	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:13.003    00:38:02	-- scripts/common.sh@335 -- # IFS=.-:
00:07:13.003    00:38:02	-- scripts/common.sh@335 -- # read -ra ver1
00:07:13.003    00:38:02	-- scripts/common.sh@336 -- # IFS=.-:
00:07:13.003    00:38:02	-- scripts/common.sh@336 -- # read -ra ver2
00:07:13.003    00:38:02	-- scripts/common.sh@337 -- # local 'op=<'
00:07:13.003    00:38:02	-- scripts/common.sh@339 -- # ver1_l=2
00:07:13.003    00:38:02	-- scripts/common.sh@340 -- # ver2_l=1
00:07:13.003    00:38:02	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:13.003    00:38:02	-- scripts/common.sh@343 -- # case "$op" in
00:07:13.003    00:38:02	-- scripts/common.sh@344 -- # : 1
00:07:13.003    00:38:02	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:13.003    00:38:02	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:13.003     00:38:02	-- scripts/common.sh@364 -- # decimal 1
00:07:13.003     00:38:02	-- scripts/common.sh@352 -- # local d=1
00:07:13.003     00:38:02	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:13.003     00:38:02	-- scripts/common.sh@354 -- # echo 1
00:07:13.003    00:38:02	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:13.003     00:38:02	-- scripts/common.sh@365 -- # decimal 2
00:07:13.003     00:38:02	-- scripts/common.sh@352 -- # local d=2
00:07:13.003     00:38:02	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:13.003     00:38:02	-- scripts/common.sh@354 -- # echo 2
00:07:13.003    00:38:02	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:13.003    00:38:02	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:13.003    00:38:02	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:13.003    00:38:02	-- scripts/common.sh@367 -- # return 0
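The block above traces scripts/common.sh comparing the installed lcov version against 2: cmp_versions splits both strings into arrays on '.', '-', and ':', then walks the components numerically; 1 < 2 settles it on the first component, so lt 1.15 2 returns 0 and the pre-2.0 LCOV_OPTS are exported below. Roughly, and assuming purely numeric components with missing ones treated as 0:

    #!/usr/bin/env bash
    # Component-wise version compare, after the cmp_versions trace above.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.0'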
00:07:13.003    00:38:02	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:13.003    00:38:02	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:13.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.003  		--rc genhtml_branch_coverage=1
00:07:13.003  		--rc genhtml_function_coverage=1
00:07:13.003  		--rc genhtml_legend=1
00:07:13.003  		--rc geninfo_all_blocks=1
00:07:13.003  		--rc geninfo_unexecuted_blocks=1
00:07:13.003  		
00:07:13.003  		'
00:07:13.003    00:38:02	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:13.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.003  		--rc genhtml_branch_coverage=1
00:07:13.003  		--rc genhtml_function_coverage=1
00:07:13.003  		--rc genhtml_legend=1
00:07:13.003  		--rc geninfo_all_blocks=1
00:07:13.003  		--rc geninfo_unexecuted_blocks=1
00:07:13.003  		
00:07:13.003  		'
00:07:13.003    00:38:02	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:13.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.003  		--rc genhtml_branch_coverage=1
00:07:13.003  		--rc genhtml_function_coverage=1
00:07:13.003  		--rc genhtml_legend=1
00:07:13.003  		--rc geninfo_all_blocks=1
00:07:13.003  		--rc geninfo_unexecuted_blocks=1
00:07:13.003  		
00:07:13.003  		'
00:07:13.003    00:38:02	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:13.003  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.003  		--rc genhtml_branch_coverage=1
00:07:13.003  		--rc genhtml_function_coverage=1
00:07:13.003  		--rc genhtml_legend=1
00:07:13.003  		--rc geninfo_all_blocks=1
00:07:13.003  		--rc geninfo_unexecuted_blocks=1
00:07:13.003  		
00:07:13.003  		'
00:07:13.003   00:38:02	-- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:13.003   00:38:02	-- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=941049
00:07:13.003   00:38:02	-- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:07:13.003   00:38:02	-- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 941049
00:07:13.003   00:38:02	-- common/autotest_common.sh@829 -- # '[' -z 941049 ']'
00:07:13.003   00:38:02	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:13.003   00:38:02	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:13.003   00:38:02	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:13.003  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:13.003   00:38:02	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:13.003   00:38:02	-- common/autotest_common.sh@10 -- # set +x
00:07:13.004  [2024-12-17 00:38:02.255876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:13.004  [2024-12-17 00:38:02.255961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941049 ]
00:07:13.263  EAL: No free 2048 kB hugepages reported on node 1
00:07:13.263  [2024-12-17 00:38:02.363848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.263  [2024-12-17 00:38:02.410441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:13.263  [2024-12-17 00:38:02.410604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.522  [2024-12-17 00:38:02.570405] 'OCF_Core' volume operations registered
00:07:13.522  [2024-12-17 00:38:02.572751] 'OCF_Cache' volume operations registered
00:07:13.522  [2024-12-17 00:38:02.575607] 'OCF Composite' volume operations registered
00:07:13.522  [2024-12-17 00:38:02.578016] 'SPDK_block_device' volume operations registered
00:07:14.091   00:38:03	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:14.091   00:38:03	-- common/autotest_common.sh@862 -- # return 0
00:07:14.091   00:38:03	-- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py load_config -i
00:07:14.351   00:38:03	-- alias_rpc/alias_rpc.sh@19 -- # killprocess 941049
00:07:14.351   00:38:03	-- common/autotest_common.sh@936 -- # '[' -z 941049 ']'
00:07:14.351   00:38:03	-- common/autotest_common.sh@940 -- # kill -0 941049
00:07:14.351    00:38:03	-- common/autotest_common.sh@941 -- # uname
00:07:14.351   00:38:03	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:14.351    00:38:03	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 941049
00:07:14.351   00:38:03	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:14.351   00:38:03	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:14.351   00:38:03	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 941049'
00:07:14.351  killing process with pid 941049
00:07:14.351   00:38:03	-- common/autotest_common.sh@955 -- # kill 941049
00:07:14.351   00:38:03	-- common/autotest_common.sh@960 -- # wait 941049
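killprocess above runs a guard chain before signalling: the pid must be non-empty, kill -0 confirms it is still alive, ps --no-headers -o comm= reads the process name (reactor_0, SPDK's core-0 reactor thread), the name is checked against sudo so the helper never signals a privilege wrapper, and only then come kill and wait. A reduced sketch; the real helper handles the sudo case by resolving the child rather than refusing, which is simplified here:

    #!/usr/bin/env bash
    # Guarded kill: verify identity before signalling, then reap.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1           # simplified: refuse
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap if it is our child
    }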
00:07:14.921  
00:07:14.921  real	0m2.042s
00:07:14.921  user	0m2.190s
00:07:14.921  sys	0m0.634s
00:07:14.921   00:38:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:14.921   00:38:04	-- common/autotest_common.sh@10 -- # set +x
00:07:14.921  ************************************
00:07:14.921  END TEST alias_rpc
00:07:14.921  ************************************
00:07:14.921   00:38:04	-- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]]
00:07:14.921   00:38:04	-- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:14.921   00:38:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:14.921   00:38:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:14.921   00:38:04	-- common/autotest_common.sh@10 -- # set +x
00:07:14.921  ************************************
00:07:14.921  START TEST spdkcli_tcp
00:07:14.921  ************************************
00:07:14.921   00:38:04	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:15.180  * Looking for test storage...
00:07:15.180  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli
00:07:15.180    00:38:04	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:15.180     00:38:04	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:15.180     00:38:04	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:15.180    00:38:04	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:15.180    00:38:04	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:15.180    00:38:04	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:15.180    00:38:04	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:15.180    00:38:04	-- scripts/common.sh@335 -- # IFS=.-:
00:07:15.180    00:38:04	-- scripts/common.sh@335 -- # read -ra ver1
00:07:15.180    00:38:04	-- scripts/common.sh@336 -- # IFS=.-:
00:07:15.180    00:38:04	-- scripts/common.sh@336 -- # read -ra ver2
00:07:15.180    00:38:04	-- scripts/common.sh@337 -- # local 'op=<'
00:07:15.180    00:38:04	-- scripts/common.sh@339 -- # ver1_l=2
00:07:15.180    00:38:04	-- scripts/common.sh@340 -- # ver2_l=1
00:07:15.180    00:38:04	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:15.180    00:38:04	-- scripts/common.sh@343 -- # case "$op" in
00:07:15.180    00:38:04	-- scripts/common.sh@344 -- # : 1
00:07:15.180    00:38:04	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:15.180    00:38:04	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:15.180     00:38:04	-- scripts/common.sh@364 -- # decimal 1
00:07:15.180     00:38:04	-- scripts/common.sh@352 -- # local d=1
00:07:15.180     00:38:04	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:15.180     00:38:04	-- scripts/common.sh@354 -- # echo 1
00:07:15.180    00:38:04	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:15.180     00:38:04	-- scripts/common.sh@365 -- # decimal 2
00:07:15.180     00:38:04	-- scripts/common.sh@352 -- # local d=2
00:07:15.180     00:38:04	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:15.180     00:38:04	-- scripts/common.sh@354 -- # echo 2
00:07:15.180    00:38:04	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:15.180    00:38:04	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:15.180    00:38:04	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:15.180    00:38:04	-- scripts/common.sh@367 -- # return 0
00:07:15.180    00:38:04	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:15.180    00:38:04	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:15.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.180  		--rc genhtml_branch_coverage=1
00:07:15.180  		--rc genhtml_function_coverage=1
00:07:15.180  		--rc genhtml_legend=1
00:07:15.180  		--rc geninfo_all_blocks=1
00:07:15.180  		--rc geninfo_unexecuted_blocks=1
00:07:15.180  		
00:07:15.180  		'
00:07:15.180    00:38:04	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:15.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.180  		--rc genhtml_branch_coverage=1
00:07:15.180  		--rc genhtml_function_coverage=1
00:07:15.180  		--rc genhtml_legend=1
00:07:15.180  		--rc geninfo_all_blocks=1
00:07:15.180  		--rc geninfo_unexecuted_blocks=1
00:07:15.180  		
00:07:15.180  		'
00:07:15.180    00:38:04	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:15.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.180  		--rc genhtml_branch_coverage=1
00:07:15.180  		--rc genhtml_function_coverage=1
00:07:15.180  		--rc genhtml_legend=1
00:07:15.180  		--rc geninfo_all_blocks=1
00:07:15.180  		--rc geninfo_unexecuted_blocks=1
00:07:15.180  		
00:07:15.180  		'
00:07:15.180    00:38:04	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:15.180  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:15.180  		--rc genhtml_branch_coverage=1
00:07:15.180  		--rc genhtml_function_coverage=1
00:07:15.180  		--rc genhtml_legend=1
00:07:15.180  		--rc geninfo_all_blocks=1
00:07:15.180  		--rc geninfo_unexecuted_blocks=1
00:07:15.180  		
00:07:15.180  		'
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/common.sh
00:07:15.180    00:38:04	-- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:07:15.180    00:38:04	-- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/clear_config.py
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@19 -- # PORT=9998
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:07:15.180   00:38:04	-- common/autotest_common.sh@722 -- # xtrace_disable
00:07:15.180   00:38:04	-- common/autotest_common.sh@10 -- # set +x
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=941381
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@27 -- # waitforlisten 941381
00:07:15.180   00:38:04	-- common/autotest_common.sh@829 -- # '[' -z 941381 ']'
00:07:15.180   00:38:04	-- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:07:15.180   00:38:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.180   00:38:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:15.180   00:38:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.180  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.180   00:38:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:15.180   00:38:04	-- common/autotest_common.sh@10 -- # set +x
00:07:15.180  [2024-12-17 00:38:04.362230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:15.180  [2024-12-17 00:38:04.362296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941381 ]
00:07:15.180  EAL: No free 2048 kB hugepages reported on node 1
00:07:15.439  [2024-12-17 00:38:04.455985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:15.439  [2024-12-17 00:38:04.504233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:15.439  [2024-12-17 00:38:04.504445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:15.439  [2024-12-17 00:38:04.504449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.439  [2024-12-17 00:38:04.662767] 'OCF_Core' volume operations registered
00:07:15.439  [2024-12-17 00:38:04.665063] 'OCF_Cache' volume operations registered
00:07:15.439  [2024-12-17 00:38:04.667835] 'OCF Composite' volume operations registered
00:07:15.439  [2024-12-17 00:38:04.670168] 'SPDK_block_device' volume operations registered
00:07:16.384   00:38:05	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:16.384   00:38:05	-- common/autotest_common.sh@862 -- # return 0
00:07:16.384   00:38:05	-- spdkcli/tcp.sh@31 -- # socat_pid=941477
00:07:16.384   00:38:05	-- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:07:16.384   00:38:05	-- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:07:16.384  [
00:07:16.384    "bdev_malloc_delete",
00:07:16.384    "bdev_malloc_create",
00:07:16.384    "bdev_null_resize",
00:07:16.384    "bdev_null_delete",
00:07:16.384    "bdev_null_create",
00:07:16.384    "bdev_nvme_cuse_unregister",
00:07:16.384    "bdev_nvme_cuse_register",
00:07:16.384    "bdev_opal_new_user",
00:07:16.384    "bdev_opal_set_lock_state",
00:07:16.384    "bdev_opal_delete",
00:07:16.384    "bdev_opal_get_info",
00:07:16.384    "bdev_opal_create",
00:07:16.384    "bdev_nvme_opal_revert",
00:07:16.384    "bdev_nvme_opal_init",
00:07:16.384    "bdev_nvme_send_cmd",
00:07:16.384    "bdev_nvme_get_path_iostat",
00:07:16.384    "bdev_nvme_get_mdns_discovery_info",
00:07:16.384    "bdev_nvme_stop_mdns_discovery",
00:07:16.384    "bdev_nvme_start_mdns_discovery",
00:07:16.384    "bdev_nvme_set_multipath_policy",
00:07:16.384    "bdev_nvme_set_preferred_path",
00:07:16.384    "bdev_nvme_get_io_paths",
00:07:16.384    "bdev_nvme_remove_error_injection",
00:07:16.384    "bdev_nvme_add_error_injection",
00:07:16.384    "bdev_nvme_get_discovery_info",
00:07:16.384    "bdev_nvme_stop_discovery",
00:07:16.384    "bdev_nvme_start_discovery",
00:07:16.384    "bdev_nvme_get_controller_health_info",
00:07:16.384    "bdev_nvme_disable_controller",
00:07:16.384    "bdev_nvme_enable_controller",
00:07:16.384    "bdev_nvme_reset_controller",
00:07:16.384    "bdev_nvme_get_transport_statistics",
00:07:16.384    "bdev_nvme_apply_firmware",
00:07:16.384    "bdev_nvme_detach_controller",
00:07:16.384    "bdev_nvme_get_controllers",
00:07:16.384    "bdev_nvme_attach_controller",
00:07:16.384    "bdev_nvme_set_hotplug",
00:07:16.384    "bdev_nvme_set_options",
00:07:16.384    "bdev_passthru_delete",
00:07:16.384    "bdev_passthru_create",
00:07:16.384    "bdev_lvol_grow_lvstore",
00:07:16.384    "bdev_lvol_get_lvols",
00:07:16.384    "bdev_lvol_get_lvstores",
00:07:16.384    "bdev_lvol_delete",
00:07:16.384    "bdev_lvol_set_read_only",
00:07:16.384    "bdev_lvol_resize",
00:07:16.384    "bdev_lvol_decouple_parent",
00:07:16.384    "bdev_lvol_inflate",
00:07:16.384    "bdev_lvol_rename",
00:07:16.384    "bdev_lvol_clone_bdev",
00:07:16.384    "bdev_lvol_clone",
00:07:16.384    "bdev_lvol_snapshot",
00:07:16.384    "bdev_lvol_create",
00:07:16.384    "bdev_lvol_delete_lvstore",
00:07:16.384    "bdev_lvol_rename_lvstore",
00:07:16.384    "bdev_lvol_create_lvstore",
00:07:16.384    "bdev_raid_set_options",
00:07:16.384    "bdev_raid_remove_base_bdev",
00:07:16.384    "bdev_raid_add_base_bdev",
00:07:16.384    "bdev_raid_delete",
00:07:16.384    "bdev_raid_create",
00:07:16.384    "bdev_raid_get_bdevs",
00:07:16.384    "bdev_error_inject_error",
00:07:16.384    "bdev_error_delete",
00:07:16.384    "bdev_error_create",
00:07:16.384    "bdev_split_delete",
00:07:16.384    "bdev_split_create",
00:07:16.384    "bdev_delay_delete",
00:07:16.384    "bdev_delay_create",
00:07:16.384    "bdev_delay_update_latency",
00:07:16.384    "bdev_zone_block_delete",
00:07:16.384    "bdev_zone_block_create",
00:07:16.384    "blobfs_create",
00:07:16.384    "blobfs_detect",
00:07:16.384    "blobfs_set_cache_size",
00:07:16.384    "bdev_ocf_flush_status",
00:07:16.384    "bdev_ocf_flush_start",
00:07:16.384    "bdev_ocf_set_seqcutoff",
00:07:16.384    "bdev_ocf_set_cache_mode",
00:07:16.384    "bdev_ocf_get_bdevs",
00:07:16.384    "bdev_ocf_reset_stats",
00:07:16.384    "bdev_ocf_get_stats",
00:07:16.384    "bdev_ocf_delete",
00:07:16.384    "bdev_ocf_create",
00:07:16.384    "bdev_aio_delete",
00:07:16.384    "bdev_aio_rescan",
00:07:16.384    "bdev_aio_create",
00:07:16.384    "bdev_ftl_set_property",
00:07:16.384    "bdev_ftl_get_properties",
00:07:16.384    "bdev_ftl_get_stats",
00:07:16.384    "bdev_ftl_unmap",
00:07:16.384    "bdev_ftl_unload",
00:07:16.384    "bdev_ftl_delete",
00:07:16.384    "bdev_ftl_load",
00:07:16.384    "bdev_ftl_create",
00:07:16.384    "bdev_virtio_attach_controller",
00:07:16.384    "bdev_virtio_scsi_get_devices",
00:07:16.384    "bdev_virtio_detach_controller",
00:07:16.384    "bdev_virtio_blk_set_hotplug",
00:07:16.384    "bdev_iscsi_delete",
00:07:16.384    "bdev_iscsi_create",
00:07:16.384    "bdev_iscsi_set_options",
00:07:16.384    "accel_error_inject_error",
00:07:16.384    "ioat_scan_accel_module",
00:07:16.384    "dsa_scan_accel_module",
00:07:16.384    "iaa_scan_accel_module",
00:07:16.384    "iscsi_set_options",
00:07:16.384    "iscsi_get_auth_groups",
00:07:16.384    "iscsi_auth_group_remove_secret",
00:07:16.384    "iscsi_auth_group_add_secret",
00:07:16.384    "iscsi_delete_auth_group",
00:07:16.384    "iscsi_create_auth_group",
00:07:16.384    "iscsi_set_discovery_auth",
00:07:16.384    "iscsi_get_options",
00:07:16.384    "iscsi_target_node_request_logout",
00:07:16.384    "iscsi_target_node_set_redirect",
00:07:16.384    "iscsi_target_node_set_auth",
00:07:16.384    "iscsi_target_node_add_lun",
00:07:16.384    "iscsi_get_connections",
00:07:16.384    "iscsi_portal_group_set_auth",
00:07:16.384    "iscsi_start_portal_group",
00:07:16.384    "iscsi_delete_portal_group",
00:07:16.384    "iscsi_create_portal_group",
00:07:16.384    "iscsi_get_portal_groups",
00:07:16.384    "iscsi_delete_target_node",
00:07:16.384    "iscsi_target_node_remove_pg_ig_maps",
00:07:16.384    "iscsi_target_node_add_pg_ig_maps",
00:07:16.384    "iscsi_create_target_node",
00:07:16.384    "iscsi_get_target_nodes",
00:07:16.384    "iscsi_delete_initiator_group",
00:07:16.384    "iscsi_initiator_group_remove_initiators",
00:07:16.384    "iscsi_initiator_group_add_initiators",
00:07:16.384    "iscsi_create_initiator_group",
00:07:16.384    "iscsi_get_initiator_groups",
00:07:16.384    "nvmf_set_crdt",
00:07:16.384    "nvmf_set_config",
00:07:16.384    "nvmf_set_max_subsystems",
00:07:16.384    "nvmf_subsystem_get_listeners",
00:07:16.384    "nvmf_subsystem_get_qpairs",
00:07:16.384    "nvmf_subsystem_get_controllers",
00:07:16.384    "nvmf_get_stats",
00:07:16.384    "nvmf_get_transports",
00:07:16.384    "nvmf_create_transport",
00:07:16.384    "nvmf_get_targets",
00:07:16.384    "nvmf_delete_target",
00:07:16.384    "nvmf_create_target",
00:07:16.384    "nvmf_subsystem_allow_any_host",
00:07:16.384    "nvmf_subsystem_remove_host",
00:07:16.384    "nvmf_subsystem_add_host",
00:07:16.384    "nvmf_subsystem_remove_ns",
00:07:16.384    "nvmf_subsystem_add_ns",
00:07:16.384    "nvmf_subsystem_listener_set_ana_state",
00:07:16.384    "nvmf_discovery_get_referrals",
00:07:16.384    "nvmf_discovery_remove_referral",
00:07:16.384    "nvmf_discovery_add_referral",
00:07:16.384    "nvmf_subsystem_remove_listener",
00:07:16.384    "nvmf_subsystem_add_listener",
00:07:16.384    "nvmf_delete_subsystem",
00:07:16.384    "nvmf_create_subsystem",
00:07:16.384    "nvmf_get_subsystems",
00:07:16.384    "env_dpdk_get_mem_stats",
00:07:16.384    "nbd_get_disks",
00:07:16.384    "nbd_stop_disk",
00:07:16.385    "nbd_start_disk",
00:07:16.385    "ublk_recover_disk",
00:07:16.385    "ublk_get_disks",
00:07:16.385    "ublk_stop_disk",
00:07:16.385    "ublk_start_disk",
00:07:16.385    "ublk_destroy_target",
00:07:16.385    "ublk_create_target",
00:07:16.385    "virtio_blk_create_transport",
00:07:16.385    "virtio_blk_get_transports",
00:07:16.385    "vhost_controller_set_coalescing",
00:07:16.385    "vhost_get_controllers",
00:07:16.385    "vhost_delete_controller",
00:07:16.385    "vhost_create_blk_controller",
00:07:16.385    "vhost_scsi_controller_remove_target",
00:07:16.385    "vhost_scsi_controller_add_target",
00:07:16.385    "vhost_start_scsi_controller",
00:07:16.385    "vhost_create_scsi_controller",
00:07:16.385    "thread_set_cpumask",
00:07:16.385    "framework_get_scheduler",
00:07:16.385    "framework_set_scheduler",
00:07:16.385    "framework_get_reactors",
00:07:16.385    "thread_get_io_channels",
00:07:16.385    "thread_get_pollers",
00:07:16.385    "thread_get_stats",
00:07:16.385    "framework_monitor_context_switch",
00:07:16.385    "spdk_kill_instance",
00:07:16.385    "log_enable_timestamps",
00:07:16.385    "log_get_flags",
00:07:16.385    "log_clear_flag",
00:07:16.385    "log_set_flag",
00:07:16.385    "log_get_level",
00:07:16.385    "log_set_level",
00:07:16.385    "log_get_print_level",
00:07:16.385    "log_set_print_level",
00:07:16.385    "framework_enable_cpumask_locks",
00:07:16.385    "framework_disable_cpumask_locks",
00:07:16.385    "framework_wait_init",
00:07:16.385    "framework_start_init",
00:07:16.385    "scsi_get_devices",
00:07:16.385    "bdev_get_histogram",
00:07:16.385    "bdev_enable_histogram",
00:07:16.385    "bdev_set_qos_limit",
00:07:16.385    "bdev_set_qd_sampling_period",
00:07:16.385    "bdev_get_bdevs",
00:07:16.385    "bdev_reset_iostat",
00:07:16.385    "bdev_get_iostat",
00:07:16.385    "bdev_examine",
00:07:16.385    "bdev_wait_for_examine",
00:07:16.385    "bdev_set_options",
00:07:16.385    "notify_get_notifications",
00:07:16.385    "notify_get_types",
00:07:16.385    "accel_get_stats",
00:07:16.385    "accel_set_options",
00:07:16.385    "accel_set_driver",
00:07:16.385    "accel_crypto_key_destroy",
00:07:16.385    "accel_crypto_keys_get",
00:07:16.385    "accel_crypto_key_create",
00:07:16.385    "accel_assign_opc",
00:07:16.385    "accel_get_module_info",
00:07:16.385    "accel_get_opc_assignments",
00:07:16.385    "vmd_rescan",
00:07:16.385    "vmd_remove_device",
00:07:16.385    "vmd_enable",
00:07:16.385    "sock_set_default_impl",
00:07:16.385    "sock_impl_set_options",
00:07:16.385    "sock_impl_get_options",
00:07:16.385    "iobuf_get_stats",
00:07:16.385    "iobuf_set_options",
00:07:16.385    "framework_get_pci_devices",
00:07:16.385    "framework_get_config",
00:07:16.385    "framework_get_subsystems",
00:07:16.385    "trace_get_info",
00:07:16.385    "trace_get_tpoint_group_mask",
00:07:16.385    "trace_disable_tpoint_group",
00:07:16.385    "trace_enable_tpoint_group",
00:07:16.385    "trace_clear_tpoint_mask",
00:07:16.385    "trace_set_tpoint_mask",
00:07:16.385    "spdk_get_version",
00:07:16.385    "rpc_get_methods"
00:07:16.385  ]
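That JSON array is the complete rpc_get_methods reply, fetched over TCP rather than the default UNIX socket: socat (pid 941477, started at tcp.sh@30) listens on 127.0.0.1:9998 and relays each connection to /var/tmp/spdk.sock, while rpc.py talks to the TCP side with 100 connection retries and a 2 s timeout. The same bridge as a standalone sketch; reuseaddr and fork are conveniences not present in the single-shot listener above:

    #!/usr/bin/env bash
    # TCP front door for an RPC server that only listens on a UNIX socket.
    socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"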
00:07:16.385   00:38:05	-- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:07:16.385   00:38:05	-- common/autotest_common.sh@728 -- # xtrace_disable
00:07:16.385   00:38:05	-- common/autotest_common.sh@10 -- # set +x
00:07:16.385   00:38:05	-- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:07:16.385   00:38:05	-- spdkcli/tcp.sh@38 -- # killprocess 941381
00:07:16.385   00:38:05	-- common/autotest_common.sh@936 -- # '[' -z 941381 ']'
00:07:16.385   00:38:05	-- common/autotest_common.sh@940 -- # kill -0 941381
00:07:16.385    00:38:05	-- common/autotest_common.sh@941 -- # uname
00:07:16.385   00:38:05	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:16.385    00:38:05	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 941381
00:07:16.644   00:38:05	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:16.644   00:38:05	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:16.644   00:38:05	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 941381'
00:07:16.644  killing process with pid 941381
00:07:16.644   00:38:05	-- common/autotest_common.sh@955 -- # kill 941381
00:07:16.644   00:38:05	-- common/autotest_common.sh@960 -- # wait 941381
00:07:17.213  
00:07:17.213  real	0m2.070s
00:07:17.213  user	0m3.810s
00:07:17.213  sys	0m0.683s
00:07:17.213   00:38:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:17.213   00:38:06	-- common/autotest_common.sh@10 -- # set +x
00:07:17.213  ************************************
00:07:17.213  END TEST spdkcli_tcp
00:07:17.213  ************************************
00:07:17.213   00:38:06	-- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:17.213   00:38:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:17.213   00:38:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:17.213   00:38:06	-- common/autotest_common.sh@10 -- # set +x
00:07:17.213  ************************************
00:07:17.213  START TEST dpdk_mem_utility
00:07:17.213  ************************************
00:07:17.213   00:38:06	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:17.213  * Looking for test storage...
00:07:17.213  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility
00:07:17.213    00:38:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:17.213     00:38:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:17.213     00:38:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:17.213    00:38:06	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:17.213    00:38:06	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:17.213    00:38:06	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:17.213    00:38:06	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:17.213    00:38:06	-- scripts/common.sh@335 -- # IFS=.-:
00:07:17.213    00:38:06	-- scripts/common.sh@335 -- # read -ra ver1
00:07:17.213    00:38:06	-- scripts/common.sh@336 -- # IFS=.-:
00:07:17.213    00:38:06	-- scripts/common.sh@336 -- # read -ra ver2
00:07:17.213    00:38:06	-- scripts/common.sh@337 -- # local 'op=<'
00:07:17.213    00:38:06	-- scripts/common.sh@339 -- # ver1_l=2
00:07:17.213    00:38:06	-- scripts/common.sh@340 -- # ver2_l=1
00:07:17.213    00:38:06	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:17.213    00:38:06	-- scripts/common.sh@343 -- # case "$op" in
00:07:17.213    00:38:06	-- scripts/common.sh@344 -- # : 1
00:07:17.213    00:38:06	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:17.213    00:38:06	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:17.213     00:38:06	-- scripts/common.sh@364 -- # decimal 1
00:07:17.213     00:38:06	-- scripts/common.sh@352 -- # local d=1
00:07:17.213     00:38:06	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:17.213     00:38:06	-- scripts/common.sh@354 -- # echo 1
00:07:17.213    00:38:06	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:17.213     00:38:06	-- scripts/common.sh@365 -- # decimal 2
00:07:17.213     00:38:06	-- scripts/common.sh@352 -- # local d=2
00:07:17.213     00:38:06	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:17.213     00:38:06	-- scripts/common.sh@354 -- # echo 2
00:07:17.213    00:38:06	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:17.213    00:38:06	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:17.213    00:38:06	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:17.213    00:38:06	-- scripts/common.sh@367 -- # return 0
00:07:17.213    00:38:06	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:17.213    00:38:06	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:17.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.213  		--rc genhtml_branch_coverage=1
00:07:17.213  		--rc genhtml_function_coverage=1
00:07:17.213  		--rc genhtml_legend=1
00:07:17.213  		--rc geninfo_all_blocks=1
00:07:17.213  		--rc geninfo_unexecuted_blocks=1
00:07:17.213  		
00:07:17.213  		'
00:07:17.213    00:38:06	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:17.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.213  		--rc genhtml_branch_coverage=1
00:07:17.213  		--rc genhtml_function_coverage=1
00:07:17.213  		--rc genhtml_legend=1
00:07:17.213  		--rc geninfo_all_blocks=1
00:07:17.213  		--rc geninfo_unexecuted_blocks=1
00:07:17.213  		
00:07:17.213  		'
00:07:17.213    00:38:06	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:17.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.213  		--rc genhtml_branch_coverage=1
00:07:17.213  		--rc genhtml_function_coverage=1
00:07:17.213  		--rc genhtml_legend=1
00:07:17.213  		--rc geninfo_all_blocks=1
00:07:17.213  		--rc geninfo_unexecuted_blocks=1
00:07:17.213  		
00:07:17.213  		'
00:07:17.213    00:38:06	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:17.213  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:17.213  		--rc genhtml_branch_coverage=1
00:07:17.213  		--rc genhtml_function_coverage=1
00:07:17.213  		--rc genhtml_legend=1
00:07:17.213  		--rc geninfo_all_blocks=1
00:07:17.213  		--rc geninfo_unexecuted_blocks=1
00:07:17.213  		
00:07:17.213  		'
00:07:17.213   00:38:06	-- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:07:17.213   00:38:06	-- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=941722
00:07:17.213   00:38:06	-- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 941722
00:07:17.213   00:38:06	-- common/autotest_common.sh@829 -- # '[' -z 941722 ']'
00:07:17.213   00:38:06	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:17.213   00:38:06	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:17.213   00:38:06	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:17.213  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:17.213   00:38:06	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:17.213   00:38:06	-- common/autotest_common.sh@10 -- # set +x
00:07:17.213   00:38:06	-- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:07:17.473  [2024-12-17 00:38:06.478399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:17.473  [2024-12-17 00:38:06.478470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941722 ]
00:07:17.473  EAL: No free 2048 kB hugepages reported on node 1
00:07:17.473  [2024-12-17 00:38:06.584381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.473  [2024-12-17 00:38:06.632246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:17.473  [2024-12-17 00:38:06.632404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.732  [2024-12-17 00:38:06.809848] 'OCF_Core' volume operations registered
00:07:17.732  [2024-12-17 00:38:06.812286] 'OCF_Cache' volume operations registered
00:07:17.732  [2024-12-17 00:38:06.815204] 'OCF Composite' volume operations registered
00:07:17.732  [2024-12-17 00:38:06.817644] 'SPDK_block_device' volume operations registered
00:07:18.301   00:38:07	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:18.301   00:38:07	-- common/autotest_common.sh@862 -- # return 0
00:07:18.301   00:38:07	-- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:18.301   00:38:07	-- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:18.301   00:38:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.301   00:38:07	-- common/autotest_common.sh@10 -- # set +x
00:07:18.301  {
00:07:18.301  "filename": "/tmp/spdk_mem_dump.txt"
00:07:18.301  }
00:07:18.301   00:38:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.301   00:38:07	-- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:07:18.301  DPDK memory size 1198.000000 MiB in 1 heap(s)
00:07:18.301  1 heaps totaling size 1198.000000 MiB
00:07:18.301    size: 1198.000000 MiB heap id: 0
00:07:18.301  end heaps----------
00:07:18.301  26 mempools totaling size 954.459290 MiB
00:07:18.301    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:07:18.301    size:  158.602051 MiB name: PDU_data_out_Pool
00:07:18.301    size:   84.521057 MiB name: bdev_io_941722
00:07:18.301    size:   76.286926 MiB name: ocf_env_12:ocf_mio_8
00:07:18.301    size:   60.174072 MiB name: ocf_env_8:ocf_req_128
00:07:18.301    size:   51.011292 MiB name: evtpool_941722
00:07:18.301    size:   50.003479 MiB name: msgpool_941722
00:07:18.301    size:   40.142639 MiB name: ocf_env_11:ocf_mio_4
00:07:18.301    size:   34.164612 MiB name: ocf_env_7:ocf_req_64
00:07:18.301    size:   22.138245 MiB name: ocf_env_6:ocf_req_32
00:07:18.301    size:   22.138245 MiB name: ocf_env_10:ocf_mio_2
00:07:18.301    size:   21.763794 MiB name: PDU_Pool
00:07:18.301    size:   19.513306 MiB name: SCSI_TASK_Pool
00:07:18.301    size:   16.136780 MiB name: ocf_env_5:ocf_req_16
00:07:18.301    size:   14.136292 MiB name: ocf_env_4:ocf_req_8
00:07:18.301    size:   14.136292 MiB name: ocf_env_9:ocf_mio_1
00:07:18.301    size:   12.136414 MiB name: ocf_env_3:ocf_req_4
00:07:18.301    size:   10.135315 MiB name: ocf_env_1:ocf_req_1
00:07:18.301    size:   10.135315 MiB name: ocf_env_2:ocf_req_2
00:07:18.301    size:    8.133545 MiB name: ocf_env_17:OCF Composit
00:07:18.301    size:    6.133728 MiB name: ocf_env_16:OCF_Cache
00:07:18.301    size:    6.133728 MiB name: ocf_env_18:SPDK_block_d
00:07:18.301    size:    1.609375 MiB name: ocf_env_15:ocf_mio_64
00:07:18.301    size:    1.310547 MiB name: ocf_env_14:ocf_mio_32
00:07:18.301    size:    1.161133 MiB name: ocf_env_13:ocf_mio_16
00:07:18.301    size:    0.026123 MiB name: Session_Pool
00:07:18.301  end mempools-------
00:07:18.301  6 memzones totaling size 4.142822 MiB
00:07:18.301    size:    1.000366 MiB name: RG_ring_0_941722
00:07:18.301    size:    1.000366 MiB name: RG_ring_1_941722
00:07:18.301    size:    1.000366 MiB name: RG_ring_4_941722
00:07:18.301    size:    1.000366 MiB name: RG_ring_5_941722
00:07:18.301    size:    0.125366 MiB name: RG_ring_2_941722
00:07:18.301    size:    0.015991 MiB name: RG_ring_3_941722
00:07:18.301  end memzones-------
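The dump above is produced in two steps: the env_dpdk_get_mem_stats RPC (invoked through rpc_cmd earlier) makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py summarizes that file, here one 1198 MiB heap, 26 mempools (the per-pid evtpool/msgpool plus the ocf_env_* pools), and six RG_ring_* memzones. The flow, as a sketch against a running target on the default RPC socket:

    #!/usr/bin/env bash
    # Dump, then parse: summary first, per-element detail second.
    scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                  # heaps / mempools / memzones summary
    scripts/dpdk_mem_info.py -m 0             # element-level view of heap 0 (below)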
00:07:18.301   00:38:07	-- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:07:18.563  heap id: 0 total size: 1198.000000 MiB number of busy elements: 120 number of free elements: 47
00:07:18.563    list of free elements. size: 40.154602 MiB
00:07:18.563      element at address: 0x200030800000 with size:    0.999878 MiB
00:07:18.563      element at address: 0x200030200000 with size:    0.999329 MiB
00:07:18.563      element at address: 0x200030c00000 with size:    0.999329 MiB
00:07:18.563      element at address: 0x20002f800000 with size:    0.998962 MiB
00:07:18.563      element at address: 0x20002f000000 with size:    0.998779 MiB
00:07:18.563      element at address: 0x200018e00000 with size:    0.998718 MiB
00:07:18.563      element at address: 0x200019000000 with size:    0.997375 MiB
00:07:18.563      element at address: 0x200019a00000 with size:    0.997375 MiB
00:07:18.563      element at address: 0x20001b000000 with size:    0.996399 MiB
00:07:18.563      element at address: 0x200024a00000 with size:    0.996399 MiB
00:07:18.563      element at address: 0x200003e00000 with size:    0.996277 MiB
00:07:18.563      element at address: 0x20001a400000 with size:    0.996277 MiB
00:07:18.563      element at address: 0x20001be00000 with size:    0.995911 MiB
00:07:18.563      element at address: 0x20001d000000 with size:    0.994446 MiB
00:07:18.563      element at address: 0x200025a00000 with size:    0.994446 MiB
00:07:18.563      element at address: 0x200049c00000 with size:    0.994446 MiB
00:07:18.563      element at address: 0x200027200000 with size:    0.990051 MiB
00:07:18.563      element at address: 0x20001e800000 with size:    0.968079 MiB
00:07:18.563      element at address: 0x20003fa00000 with size:    0.959961 MiB
00:07:18.563      element at address: 0x200020c00000 with size:    0.958374 MiB
00:07:18.563      element at address: 0x200030a00000 with size:    0.936584 MiB
00:07:18.563      element at address: 0x20001ce00000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x20001e600000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x200020a00000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x200024800000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x200025800000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x200027000000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x200029a00000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x20002ee00000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x20002f600000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x200030000000 with size:    0.866211 MiB
00:07:18.563      element at address: 0x200007000000 with size:    0.866089 MiB
00:07:18.563      element at address: 0x20000b200000 with size:    0.866089 MiB
00:07:18.563      element at address: 0x200000400000 with size:    0.865723 MiB
00:07:18.563      element at address: 0x200000800000 with size:    0.863159 MiB
00:07:18.563      element at address: 0x200029c00000 with size:    0.845764 MiB
00:07:18.563      element at address: 0x200013800000 with size:    0.845581 MiB
00:07:18.563      element at address: 0x200000200000 with size:    0.841614 MiB
00:07:18.563      element at address: 0x20002e800000 with size:    0.837769 MiB
00:07:18.563      element at address: 0x20002ea00000 with size:    0.688354 MiB
00:07:18.563      element at address: 0x200032600000 with size:    0.582886 MiB
00:07:18.563      element at address: 0x200030e00000 with size:    0.490845 MiB
00:07:18.563      element at address: 0x200049a00000 with size:    0.490845 MiB
00:07:18.563      element at address: 0x200031000000 with size:    0.485657 MiB
00:07:18.563      element at address: 0x20003fc00000 with size:    0.410034 MiB
00:07:18.563      element at address: 0x20002ec00000 with size:    0.389160 MiB
00:07:18.563      element at address: 0x200003a00000 with size:    0.355530 MiB
00:07:18.563    list of standard malloc elements. size: 199.233032 MiB
00:07:18.563      element at address: 0x20000b3fff80 with size:  132.000122 MiB
00:07:18.563      element at address: 0x2000071fff80 with size:   64.000122 MiB
00:07:18.563      element at address: 0x200018efff80 with size:    1.000122 MiB
00:07:18.563      element at address: 0x2000308fff80 with size:    1.000122 MiB
00:07:18.563      element at address: 0x200030afff80 with size:    1.000122 MiB
00:07:18.563      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:07:18.563      element at address: 0x200030aeff00 with size:    0.062622 MiB
00:07:18.563      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:07:18.563      element at address: 0x200018effd40 with size:    0.000549 MiB
00:07:18.563      element at address: 0x200030aefdc0 with size:    0.000305 MiB
00:07:18.563      element at address: 0x200018effc40 with size:    0.000244 MiB
00:07:18.563      element at address: 0x200020cf5700 with size:    0.000244 MiB
00:07:18.563      element at address: 0x2000002d7740 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000002d7800 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000002d78c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000002d7ac0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000002d7b80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000004fdc00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000008fd180 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200003a5b040 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200003adb300 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200003adb500 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200003adf7c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200003affa80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200003affb40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200003eff0c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000070fdd80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20000b2fdd80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000138f8980 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200018effac0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200018effb80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000190ff540 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000190ff600 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000190ff6c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200019aff540 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200019aff600 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200019aff6c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001a4ff0c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001a4ff180 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001a4ff240 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001b0ff140 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001b0ff200 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001b0ff2c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001befef40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001beff000 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001beff0c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001cefde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001d0fe940 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001d0fea00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001d0feac0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001e6fde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001e8f7d40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001e8f7e00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20001e8f7ec0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200020afde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200020cf5580 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200020cf5640 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200020cf5800 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000248fde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200024aff140 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200024aff200 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200024aff2c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000258fde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200025afe940 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200025afea00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200025afeac0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000270fde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000272fd740 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000272fd800 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000272fd8c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200029afde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200029cd8840 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200029cd8900 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200029cd89c0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002e8d6780 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002e8d6840 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002e8d6900 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002e8fde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002eab0380 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002eab0440 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002eab0500 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002eafde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002ec63a00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002ec63ac0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002ec63b80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002ec63c40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002ec63d00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002ecfde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002eefde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f0ffb00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f0ffbc0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f0ffc80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f0ffd40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f6fde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f8ffbc0 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f8ffc80 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f8ffd40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x20002f8ffe00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000300fde00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x2000302ffd40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200030aefc40 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200030aefd00 with size:    0.000183 MiB
00:07:18.563      element at address: 0x200030cffd40 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200030e7da80 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200030e7db40 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200030efde00 with size:    0.000183 MiB
00:07:18.564      element at address: 0x2000310bc740 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200032695380 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200032695440 with size:    0.000183 MiB
00:07:18.564      element at address: 0x20003fafde00 with size:    0.000183 MiB
00:07:18.564      element at address: 0x20003fc68f80 with size:    0.000183 MiB
00:07:18.564      element at address: 0x20003fc69040 with size:    0.000183 MiB
00:07:18.564      element at address: 0x20003fc6fc40 with size:    0.000183 MiB
00:07:18.564      element at address: 0x20003fc6fe40 with size:    0.000183 MiB
00:07:18.564      element at address: 0x20003fc6ff00 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200049a7da80 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200049a7db40 with size:    0.000183 MiB
00:07:18.564      element at address: 0x200049afde00 with size:    0.000183 MiB
00:07:18.564    list of memzone associated elements. size: 958.612366 MiB
00:07:18.564      element at address: 0x200032695500 with size:  211.416748 MiB
00:07:18.564        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:18.564      element at address: 0x20003fc6ffc0 with size:  157.562561 MiB
00:07:18.564        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:18.564      element at address: 0x2000139fab80 with size:   84.020630 MiB
00:07:18.564        associated memzone info: size:   84.020508 MiB name: MP_bdev_io_941722_0
00:07:18.564      element at address: 0x200029cd8a80 with size:   75.153687 MiB
00:07:18.564        associated memzone info: size:   75.153564 MiB name: MP_ocf_env_12:ocf_mio_8_0
00:07:18.564      element at address: 0x200020cf58c0 with size:   59.040833 MiB
00:07:18.564        associated memzone info: size:   59.040710 MiB name: MP_ocf_env_8:ocf_req_128_0
00:07:18.564      element at address: 0x2000009ff380 with size:   48.003052 MiB
00:07:18.564        associated memzone info: size:   48.002930 MiB name: MP_evtpool_941722_0
00:07:18.564      element at address: 0x200003fff380 with size:   48.003052 MiB
00:07:18.564        associated memzone info: size:   48.002930 MiB name: MP_msgpool_941722_0
00:07:18.564      element at address: 0x2000272fd980 with size:   39.009399 MiB
00:07:18.564        associated memzone info: size:   39.009277 MiB name: MP_ocf_env_11:ocf_mio_4_0
00:07:18.564      element at address: 0x20001e8f7f80 with size:   33.031372 MiB
00:07:18.564        associated memzone info: size:   33.031250 MiB name: MP_ocf_env_7:ocf_req_64_0
00:07:18.564      element at address: 0x20001d0feb80 with size:   21.005005 MiB
00:07:18.564        associated memzone info: size:   21.004883 MiB name: MP_ocf_env_6:ocf_req_32_0
00:07:18.564      element at address: 0x200025afeb80 with size:   21.005005 MiB
00:07:18.564        associated memzone info: size:   21.004883 MiB name: MP_ocf_env_10:ocf_mio_2_0
00:07:18.564      element at address: 0x2000311be940 with size:   20.255554 MiB
00:07:18.564        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:07:18.564      element at address: 0x200049dfeb40 with size:   18.005066 MiB
00:07:18.564        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:18.564      element at address: 0x20001beff180 with size:   15.003540 MiB
00:07:18.564        associated memzone info: size:   15.003418 MiB name: MP_ocf_env_5:ocf_req_16_0
00:07:18.564      element at address: 0x20001b0ff380 with size:   13.003052 MiB
00:07:18.564        associated memzone info: size:   13.002930 MiB name: MP_ocf_env_4:ocf_req_8_0
00:07:18.564      element at address: 0x200024aff380 with size:   13.003052 MiB
00:07:18.564        associated memzone info: size:   13.002930 MiB name: MP_ocf_env_9:ocf_mio_1_0
00:07:18.564      element at address: 0x20001a4ff300 with size:   11.003174 MiB
00:07:18.564        associated memzone info: size:   11.003052 MiB name: MP_ocf_env_3:ocf_req_4_0
00:07:18.564      element at address: 0x2000190ff780 with size:    9.002075 MiB
00:07:18.564        associated memzone info: size:    9.001953 MiB name: MP_ocf_env_1:ocf_req_1_0
00:07:18.564      element at address: 0x200019aff780 with size:    9.002075 MiB
00:07:18.564        associated memzone info: size:    9.001953 MiB name: MP_ocf_env_2:ocf_req_2_0
00:07:18.564      element at address: 0x20002f8ffec0 with size:    7.000305 MiB
00:07:18.564        associated memzone info: size:    7.000183 MiB name: MP_ocf_env_17:OCF Composit_0
00:07:18.564      element at address: 0x20002f0ffe00 with size:    5.000488 MiB
00:07:18.564        associated memzone info: size:    5.000366 MiB name: MP_ocf_env_16:OCF_Cache_0
00:07:18.564      element at address: 0x2000302ffe00 with size:    5.000488 MiB
00:07:18.564        associated memzone info: size:    5.000366 MiB name: MP_ocf_env_18:SPDK_block_d_0
00:07:18.564      element at address: 0x2000005ffe00 with size:    2.000488 MiB
00:07:18.564        associated memzone info: size:    2.000366 MiB name: RG_MP_evtpool_941722
00:07:18.564      element at address: 0x200003bffe00 with size:    2.000488 MiB
00:07:18.564        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_941722
00:07:18.564      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_evtpool_941722
00:07:18.564      element at address: 0x2000138f8a40 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_1:ocf_req_1
00:07:18.564      element at address: 0x20000b2fde40 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_2:ocf_req_2
00:07:18.564      element at address: 0x2000070fde40 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_3:ocf_req_4
00:07:18.564      element at address: 0x2000008fd240 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_4:ocf_req_8
00:07:18.564      element at address: 0x2000004fdcc0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_5:ocf_req_16
00:07:18.564      element at address: 0x20001cefdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_6:ocf_req_32
00:07:18.564      element at address: 0x20001e6fdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_7:ocf_req_64
00:07:18.564      element at address: 0x200020afdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_8:ocf_req_128
00:07:18.564      element at address: 0x2000248fdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_9:ocf_mio_1
00:07:18.564      element at address: 0x2000258fdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_10:ocf_mio_2
00:07:18.564      element at address: 0x2000270fdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_11:ocf_mio_4
00:07:18.564      element at address: 0x200029afdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_12:ocf_mio_8
00:07:18.564      element at address: 0x20002e8fdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_13:ocf_mio_16
00:07:18.564      element at address: 0x20002eafdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_14:ocf_mio_32
00:07:18.564      element at address: 0x20002ecfdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_15:ocf_mio_64
00:07:18.564      element at address: 0x20002eefdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_16:OCF_Cache
00:07:18.564      element at address: 0x20002f6fdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_17:OCF Composit
00:07:18.564      element at address: 0x2000300fdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_18:SPDK_block_d
00:07:18.564      element at address: 0x200030efdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:07:18.564      element at address: 0x2000310bc800 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:18.564      element at address: 0x20003fafdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:07:18.564      element at address: 0x200049afdec0 with size:    1.008118 MiB
00:07:18.564        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:18.564      element at address: 0x200003eff180 with size:    1.000488 MiB
00:07:18.564        associated memzone info: size:    1.000366 MiB name: RG_ring_0_941722
00:07:18.564      element at address: 0x200003affc00 with size:    1.000488 MiB
00:07:18.564        associated memzone info: size:    1.000366 MiB name: RG_ring_1_941722
00:07:18.564      element at address: 0x200030cffe00 with size:    1.000488 MiB
00:07:18.564        associated memzone info: size:    1.000366 MiB name: RG_ring_4_941722
00:07:18.564      element at address: 0x200049cfe940 with size:    1.000488 MiB
00:07:18.564        associated memzone info: size:    1.000366 MiB name: RG_ring_5_941722
00:07:18.564      element at address: 0x20002ec63dc0 with size:    0.600891 MiB
00:07:18.564        associated memzone info: size:    0.600769 MiB name: MP_ocf_env_15:ocf_mio_64_0
00:07:18.564      element at address: 0x200003a5b100 with size:    0.500488 MiB
00:07:18.564        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_941722
00:07:18.564      element at address: 0x200030e7dc00 with size:    0.500488 MiB
00:07:18.564        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:07:18.564      element at address: 0x200049a7dc00 with size:    0.500488 MiB
00:07:18.564        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:18.564      element at address: 0x20002eab05c0 with size:    0.302063 MiB
00:07:18.564        associated memzone info: size:    0.301941 MiB name: MP_ocf_env_14:ocf_mio_32_0
00:07:18.564      element at address: 0x20003107c540 with size:    0.250488 MiB
00:07:18.564        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:18.564      element at address: 0x20002e8d69c0 with size:    0.152649 MiB
00:07:18.564        associated memzone info: size:    0.152527 MiB name: MP_ocf_env_13:ocf_mio_16_0
00:07:18.564      element at address: 0x200003adf880 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_ring_2_941722
00:07:18.564      element at address: 0x2000138d8780 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_1:ocf_req_1
00:07:18.564      element at address: 0x20000b2ddb80 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_2:ocf_req_2
00:07:18.564      element at address: 0x2000070ddb80 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_3:ocf_req_4
00:07:18.564      element at address: 0x2000008dcf80 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_4:ocf_req_8
00:07:18.564      element at address: 0x2000004dda00 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_5:ocf_req_16
00:07:18.564      element at address: 0x20001ceddc00 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_6:ocf_req_32
00:07:18.564      element at address: 0x20001e6ddc00 with size:    0.125488 MiB
00:07:18.564        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_7:ocf_req_64
00:07:18.565      element at address: 0x200020addc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_8:ocf_req_128
00:07:18.565      element at address: 0x2000248ddc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_9:ocf_mio_1
00:07:18.565      element at address: 0x2000258ddc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_10:ocf_mio_2
00:07:18.565      element at address: 0x2000270ddc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_11:ocf_mio_4
00:07:18.565      element at address: 0x200029addc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_12:ocf_mio_8
00:07:18.565      element at address: 0x20002eeddc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_16:OCF_Cache
00:07:18.565      element at address: 0x20002f6ddc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_17:OCF Composit
00:07:18.565      element at address: 0x2000300ddc00 with size:    0.125488 MiB
00:07:18.565        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_18:SPDK_block_d
00:07:18.565      element at address: 0x20003faf5c00 with size:    0.031738 MiB
00:07:18.565        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:18.565      element at address: 0x20003fc69100 with size:    0.023743 MiB
00:07:18.565        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:07:18.565      element at address: 0x200003adb5c0 with size:    0.016113 MiB
00:07:18.565        associated memzone info: size:    0.015991 MiB name: RG_ring_3_941722
00:07:18.565      element at address: 0x20003fc6f240 with size:    0.002441 MiB
00:07:18.565        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:07:18.565      element at address: 0x20002e8fdb00 with size:    0.000732 MiB
00:07:18.565        associated memzone info: size:    0.000610 MiB name: RG_MP_ocf_env_13:ocf_mio_16
00:07:18.565      element at address: 0x20002eafdb00 with size:    0.000732 MiB
00:07:18.565        associated memzone info: size:    0.000610 MiB name: RG_MP_ocf_env_14:ocf_mio_32
00:07:18.565      element at address: 0x20002ecfdb00 with size:    0.000732 MiB
00:07:18.565        associated memzone info: size:    0.000610 MiB name: RG_MP_ocf_env_15:ocf_mio_64
00:07:18.565      element at address: 0x2000002d7980 with size:    0.000305 MiB
00:07:18.565        associated memzone info: size:    0.000183 MiB name: MP_msgpool_941722
00:07:18.565      element at address: 0x200003adb3c0 with size:    0.000305 MiB
00:07:18.565        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_941722
00:07:18.565      element at address: 0x20003fc6fd00 with size:    0.000305 MiB
00:07:18.565        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
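The heap-element and memzone tables above are the dpdk_mem_utility dump of the target's DPDK allocator state. A minimal sketch of pulling the same dump by hand over the app's RPC socket (the workspace path and the default output file are assumptions; the test script resolves both itself):

    SPDK_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk
    # env_dpdk_get_mem_stats asks the target to write its malloc/memzone
    # state to a file (by default /tmp/spdk_mem_dump.txt on the target side).
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats
    grep -c 'element at address' /tmp/spdk_mem_dump.txt   # count heap elements in the dump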
00:07:18.565   00:38:07	-- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:18.565   00:38:07	-- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 941722
00:07:18.565   00:38:07	-- common/autotest_common.sh@936 -- # '[' -z 941722 ']'
00:07:18.565   00:38:07	-- common/autotest_common.sh@940 -- # kill -0 941722
00:07:18.565    00:38:07	-- common/autotest_common.sh@941 -- # uname
00:07:18.565   00:38:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:18.565    00:38:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 941722
00:07:18.565   00:38:07	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:18.565   00:38:07	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:18.565   00:38:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 941722'
00:07:18.565  killing process with pid 941722
00:07:18.565   00:38:07	-- common/autotest_common.sh@955 -- # kill 941722
00:07:18.565   00:38:07	-- common/autotest_common.sh@960 -- # wait 941722
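The killprocess trace above (autotest_common.sh@936-960) reduces to a small guarded kill-and-reap. A sketch of the same pattern, assuming Linux and no sudo wrapper around the target process:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1            # the '[ -z ... ]' guard in the trace
        kill -0 "$pid" || return 1           # process must still exist
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" != sudo ] || return 1  # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so the test sees the exit
    }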
00:07:19.134  
00:07:19.134  real	0m1.904s
00:07:19.134  user	0m1.928s
00:07:19.134  sys	0m0.636s
00:07:19.134   00:38:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:19.134   00:38:08	-- common/autotest_common.sh@10 -- # set +x
00:07:19.134  ************************************
00:07:19.134  END TEST dpdk_mem_utility
00:07:19.134  ************************************
00:07:19.134   00:38:08	-- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh
00:07:19.134   00:38:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:19.134   00:38:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:19.134   00:38:08	-- common/autotest_common.sh@10 -- # set +x
00:07:19.134  ************************************
00:07:19.134  START TEST event
00:07:19.134  ************************************
00:07:19.134   00:38:08	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh
00:07:19.134  * Looking for test storage...
00:07:19.134  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event
00:07:19.134    00:38:08	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:19.134     00:38:08	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:19.134     00:38:08	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:19.134    00:38:08	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:19.134    00:38:08	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:19.134    00:38:08	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:19.134    00:38:08	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:19.134    00:38:08	-- scripts/common.sh@335 -- # IFS=.-:
00:07:19.134    00:38:08	-- scripts/common.sh@335 -- # read -ra ver1
00:07:19.134    00:38:08	-- scripts/common.sh@336 -- # IFS=.-:
00:07:19.134    00:38:08	-- scripts/common.sh@336 -- # read -ra ver2
00:07:19.134    00:38:08	-- scripts/common.sh@337 -- # local 'op=<'
00:07:19.134    00:38:08	-- scripts/common.sh@339 -- # ver1_l=2
00:07:19.134    00:38:08	-- scripts/common.sh@340 -- # ver2_l=1
00:07:19.134    00:38:08	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:19.134    00:38:08	-- scripts/common.sh@343 -- # case "$op" in
00:07:19.134    00:38:08	-- scripts/common.sh@344 -- # : 1
00:07:19.134    00:38:08	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:19.134    00:38:08	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:19.134     00:38:08	-- scripts/common.sh@364 -- # decimal 1
00:07:19.134     00:38:08	-- scripts/common.sh@352 -- # local d=1
00:07:19.134     00:38:08	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:19.134     00:38:08	-- scripts/common.sh@354 -- # echo 1
00:07:19.134    00:38:08	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:19.134     00:38:08	-- scripts/common.sh@365 -- # decimal 2
00:07:19.134     00:38:08	-- scripts/common.sh@352 -- # local d=2
00:07:19.134     00:38:08	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:19.134     00:38:08	-- scripts/common.sh@354 -- # echo 2
00:07:19.134    00:38:08	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:19.134    00:38:08	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:19.134    00:38:08	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:19.134    00:38:08	-- scripts/common.sh@367 -- # return 0
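The scripts/common.sh walk above is the lt/cmp_versions check deciding whether the installed lcov (1.15 here) predates 2.x, which selects the branch-coverage options exported next. A condensed sketch of that comparison (the real decimal() helper also rejects non-numeric fields, which this sketch assumes away):

    lt() {
        local IFS=.-:                       # split versions on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do   # compare numerically, column by column
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                            # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov older than 2'   # matches the 'return 0' traced above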
00:07:19.134    00:38:08	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:19.134    00:38:08	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:19.134  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:19.134  		--rc genhtml_branch_coverage=1
00:07:19.134  		--rc genhtml_function_coverage=1
00:07:19.134  		--rc genhtml_legend=1
00:07:19.134  		--rc geninfo_all_blocks=1
00:07:19.134  		--rc geninfo_unexecuted_blocks=1
00:07:19.134  		
00:07:19.134  		'
00:07:19.134    00:38:08	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:19.134  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:19.134  		--rc genhtml_branch_coverage=1
00:07:19.134  		--rc genhtml_function_coverage=1
00:07:19.134  		--rc genhtml_legend=1
00:07:19.134  		--rc geninfo_all_blocks=1
00:07:19.134  		--rc geninfo_unexecuted_blocks=1
00:07:19.134  		
00:07:19.134  		'
00:07:19.134    00:38:08	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:19.134  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:19.134  		--rc genhtml_branch_coverage=1
00:07:19.134  		--rc genhtml_function_coverage=1
00:07:19.134  		--rc genhtml_legend=1
00:07:19.134  		--rc geninfo_all_blocks=1
00:07:19.134  		--rc geninfo_unexecuted_blocks=1
00:07:19.134  		
00:07:19.134  		'
00:07:19.134    00:38:08	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:19.134  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:19.134  		--rc genhtml_branch_coverage=1
00:07:19.134  		--rc genhtml_function_coverage=1
00:07:19.134  		--rc genhtml_legend=1
00:07:19.134  		--rc geninfo_all_blocks=1
00:07:19.134  		--rc geninfo_unexecuted_blocks=1
00:07:19.134  		
00:07:19.134  		'
00:07:19.134   00:38:08	-- event/event.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh
00:07:19.134    00:38:08	-- bdev/nbd_common.sh@6 -- # set -e
00:07:19.134   00:38:08	-- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:19.134   00:38:08	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:07:19.134   00:38:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:19.134   00:38:08	-- common/autotest_common.sh@10 -- # set +x
00:07:19.134  ************************************
00:07:19.134  START TEST event_perf
00:07:19.134  ************************************
00:07:19.134   00:38:08	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:19.394  Running I/O for 1 second...
00:07:19.394  [2024-12-17 00:38:08.406137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:19.394  [2024-12-17 00:38:08.406226] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942096 ]
00:07:19.394  EAL: No free 2048 kB hugepages reported on node 1
00:07:19.394  [2024-12-17 00:38:08.511553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:19.394  [2024-12-17 00:38:08.565583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:19.394  [2024-12-17 00:38:08.565684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:19.394  [2024-12-17 00:38:08.565785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:19.394  [2024-12-17 00:38:08.565785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.772  Running I/O for 1 second...
00:07:20.772  lcore  0:   168186
00:07:20.772  lcore  1:   168186
00:07:20.772  lcore  2:   168186
00:07:20.772  lcore  3:   168185
00:07:20.772  done.
00:07:20.772  
00:07:20.772  real	0m1.259s
00:07:20.772  user	0m4.124s
00:07:20.772  sys	0m0.130s
00:07:20.772   00:38:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:20.772   00:38:09	-- common/autotest_common.sh@10 -- # set +x
00:07:20.772  ************************************
00:07:20.772  END TEST event_perf
00:07:20.772  ************************************
00:07:20.772   00:38:09	-- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:20.772   00:38:09	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:20.772   00:38:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:20.772   00:38:09	-- common/autotest_common.sh@10 -- # set +x
00:07:20.772  ************************************
00:07:20.772  START TEST event_reactor
00:07:20.772  ************************************
00:07:20.772   00:38:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:20.772  [2024-12-17 00:38:09.713646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:20.772  [2024-12-17 00:38:09.713736] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942268 ]
00:07:20.772  EAL: No free 2048 kB hugepages reported on node 1
00:07:20.772  [2024-12-17 00:38:09.809414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.772  [2024-12-17 00:38:09.859233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.710  test_start
00:07:21.710  oneshot
00:07:21.710  tick 100
00:07:21.710  tick 100
00:07:21.710  tick 250
00:07:21.710  tick 100
00:07:21.710  tick 100
00:07:21.710  tick 100
00:07:21.710  tick 250
00:07:21.710  tick 500
00:07:21.710  tick 100
00:07:21.710  tick 100
00:07:21.710  tick 250
00:07:21.710  tick 100
00:07:21.710  tick 100
00:07:21.710  test_end
00:07:21.710  
00:07:21.710  real	0m1.246s
00:07:21.710  user	0m1.128s
00:07:21.710  sys	0m0.111s
00:07:21.710   00:38:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:21.710   00:38:10	-- common/autotest_common.sh@10 -- # set +x
00:07:21.710  ************************************
00:07:21.710  END TEST event_reactor
00:07:21.710  ************************************
00:07:21.969   00:38:10	-- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:21.969   00:38:10	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:07:21.969   00:38:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:21.969   00:38:10	-- common/autotest_common.sh@10 -- # set +x
00:07:21.969  ************************************
00:07:21.969  START TEST event_reactor_perf
00:07:21.969  ************************************
00:07:21.969   00:38:10	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:21.969  [2024-12-17 00:38:11.007615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:21.969  [2024-12-17 00:38:11.007704] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942442 ]
00:07:21.969  EAL: No free 2048 kB hugepages reported on node 1
00:07:21.969  [2024-12-17 00:38:11.112635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.969  [2024-12-17 00:38:11.162491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.348  test_start
00:07:23.348  test_end
00:07:23.348  Performance:   323875 events per second
00:07:23.348  
00:07:23.348  real	0m1.254s
00:07:23.348  user	0m1.133s
00:07:23.348  sys	0m0.114s
00:07:23.348   00:38:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:23.348   00:38:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.348  ************************************
00:07:23.348  END TEST event_reactor_perf
00:07:23.348  ************************************
00:07:23.348    00:38:12	-- event/event.sh@49 -- # uname -s
00:07:23.348   00:38:12	-- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:23.348   00:38:12	-- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:23.348   00:38:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:23.348   00:38:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:23.348   00:38:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.348  ************************************
00:07:23.348  START TEST event_scheduler
00:07:23.348  ************************************
00:07:23.348   00:38:12	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:23.348  * Looking for test storage...
00:07:23.348  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler
00:07:23.348    00:38:12	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:23.348     00:38:12	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:23.348     00:38:12	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:23.348    00:38:12	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:23.348    00:38:12	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:23.348    00:38:12	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:23.348    00:38:12	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:23.348    00:38:12	-- scripts/common.sh@335 -- # IFS=.-:
00:07:23.348    00:38:12	-- scripts/common.sh@335 -- # read -ra ver1
00:07:23.348    00:38:12	-- scripts/common.sh@336 -- # IFS=.-:
00:07:23.348    00:38:12	-- scripts/common.sh@336 -- # read -ra ver2
00:07:23.348    00:38:12	-- scripts/common.sh@337 -- # local 'op=<'
00:07:23.348    00:38:12	-- scripts/common.sh@339 -- # ver1_l=2
00:07:23.348    00:38:12	-- scripts/common.sh@340 -- # ver2_l=1
00:07:23.348    00:38:12	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:23.348    00:38:12	-- scripts/common.sh@343 -- # case "$op" in
00:07:23.348    00:38:12	-- scripts/common.sh@344 -- # : 1
00:07:23.348    00:38:12	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:23.348    00:38:12	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:23.348     00:38:12	-- scripts/common.sh@364 -- # decimal 1
00:07:23.348     00:38:12	-- scripts/common.sh@352 -- # local d=1
00:07:23.348     00:38:12	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:23.348     00:38:12	-- scripts/common.sh@354 -- # echo 1
00:07:23.348    00:38:12	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:23.348     00:38:12	-- scripts/common.sh@365 -- # decimal 2
00:07:23.348     00:38:12	-- scripts/common.sh@352 -- # local d=2
00:07:23.348     00:38:12	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:23.348     00:38:12	-- scripts/common.sh@354 -- # echo 2
00:07:23.348    00:38:12	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:23.348    00:38:12	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:23.348    00:38:12	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:23.348    00:38:12	-- scripts/common.sh@367 -- # return 0
00:07:23.348    00:38:12	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:23.348    00:38:12	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:23.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:23.348  		--rc genhtml_branch_coverage=1
00:07:23.348  		--rc genhtml_function_coverage=1
00:07:23.348  		--rc genhtml_legend=1
00:07:23.348  		--rc geninfo_all_blocks=1
00:07:23.348  		--rc geninfo_unexecuted_blocks=1
00:07:23.348  		
00:07:23.348  		'
00:07:23.348    00:38:12	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:23.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:23.348  		--rc genhtml_branch_coverage=1
00:07:23.348  		--rc genhtml_function_coverage=1
00:07:23.348  		--rc genhtml_legend=1
00:07:23.348  		--rc geninfo_all_blocks=1
00:07:23.348  		--rc geninfo_unexecuted_blocks=1
00:07:23.348  		
00:07:23.348  		'
00:07:23.348    00:38:12	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:23.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:23.348  		--rc genhtml_branch_coverage=1
00:07:23.348  		--rc genhtml_function_coverage=1
00:07:23.348  		--rc genhtml_legend=1
00:07:23.348  		--rc geninfo_all_blocks=1
00:07:23.348  		--rc geninfo_unexecuted_blocks=1
00:07:23.348  		
00:07:23.348  		'
00:07:23.348    00:38:12	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:23.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:23.348  		--rc genhtml_branch_coverage=1
00:07:23.348  		--rc genhtml_function_coverage=1
00:07:23.348  		--rc genhtml_legend=1
00:07:23.348  		--rc geninfo_all_blocks=1
00:07:23.348  		--rc geninfo_unexecuted_blocks=1
00:07:23.348  		
00:07:23.348  		'
00:07:23.348   00:38:12	-- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:23.348   00:38:12	-- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:23.348   00:38:12	-- scheduler/scheduler.sh@35 -- # scheduler_pid=942745
00:07:23.348   00:38:12	-- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:23.348   00:38:12	-- scheduler/scheduler.sh@37 -- # waitforlisten 942745
00:07:23.348   00:38:12	-- common/autotest_common.sh@829 -- # '[' -z 942745 ']'
00:07:23.348   00:38:12	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:23.348   00:38:12	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:23.348   00:38:12	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:23.348  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:23.348   00:38:12	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:23.348   00:38:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.348  [2024-12-17 00:38:12.533906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:23.349  [2024-12-17 00:38:12.533979] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942745 ]
00:07:23.349  EAL: No free 2048 kB hugepages reported on node 1
00:07:23.608  [2024-12-17 00:38:12.631306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:23.608  [2024-12-17 00:38:12.686153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.608  [2024-12-17 00:38:12.686235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:23.608  [2024-12-17 00:38:12.686337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:23.608  [2024-12-17 00:38:12.686337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:23.608   00:38:12	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:23.608   00:38:12	-- common/autotest_common.sh@862 -- # return 0
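waitforlisten, traced above, blocks until the scheduler app both stays alive and answers on /var/tmp/spdk.sock. A sketch of that polling loop (the retry budget and the rpc_get_methods probe are assumptions; $SPDK_DIR stands for the workspace spdk checkout):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" || return 1       # app died while starting
            "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                             # never came up within the budget
    }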
00:07:23.608   00:38:12	-- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:23.608   00:38:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.608   00:38:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.608  POWER: Env isn't set yet!
00:07:23.608  POWER: Attempting to initialise ACPI cpufreq power management...
00:07:23.608  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:23.608  POWER: Cannot set governor of lcore 0 to userspace
00:07:23.608  POWER: Attempting to initialise PSTAT power management...
00:07:23.608  POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:07:23.608  POWER: Initialized successfully for lcore 0 power management
00:07:23.608  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:07:23.608  POWER: Initialized successfully for lcore 1 power management
00:07:23.608  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:07:23.608  POWER: Initialized successfully for lcore 2 power management
00:07:23.608  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:07:23.608  POWER: Initialized successfully for lcore 3 power management
00:07:23.608  [2024-12-17 00:38:12.786060] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:23.608  [2024-12-17 00:38:12.786078] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:23.608  [2024-12-17 00:38:12.786089] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
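The POWER lines show DPDK's pstate driver switching each lcore's cpufreq governor to 'performance' for the test (and back to 'powersave' on shutdown, visible further down). On Linux this maps to per-core sysfs writes, roughly:

    # Per-core scaling governor lives in sysfs; writing it needs root.
    for cpu in 0 1 2 3; do
        gov=/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
        cat "$gov"                                      # e.g. 'powersave' before the test
        echo performance | sudo tee "$gov" >/dev/null   # what the EAL pstate driver sets
    done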
00:07:23.608   00:38:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.608   00:38:12	-- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:23.608   00:38:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.608   00:38:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  [2024-12-17 00:38:12.968810] 'OCF_Core' volume operations registered
00:07:23.868  [2024-12-17 00:38:12.971218] 'OCF_Cache' volume operations registered
00:07:23.868  [2024-12-17 00:38:12.974105] 'OCF Composite' volume operations registered
00:07:23.868  [2024-12-17 00:38:12.976496] 'SPDK_block_device' volume operations registered
00:07:23.868  [2024-12-17 00:38:12.977535] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:23.868   00:38:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:12	-- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:23.868   00:38:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:23.868   00:38:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:23.868   00:38:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  ************************************
00:07:23.868  START TEST scheduler_create_thread
00:07:23.868  ************************************
00:07:23.868   00:38:12	-- common/autotest_common.sh@1114 -- # scheduler_create_thread
00:07:23.868   00:38:12	-- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:23.868   00:38:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  2
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  3
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  4
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  5
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  6
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  7
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  8
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  9
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868  10
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868    00:38:13	-- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:23.868    00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868    00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868    00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:23.868   00:38:13	-- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:23.868   00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868   00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:23.868   00:38:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.868    00:38:13	-- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:23.868    00:38:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.868    00:38:13	-- common/autotest_common.sh@10 -- # set +x
00:07:25.773    00:38:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.773   00:38:14	-- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:25.773   00:38:14	-- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:25.773   00:38:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.773   00:38:14	-- common/autotest_common.sh@10 -- # set +x
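The RPC sequence above builds the scheduler test's thread matrix: an active (load 100) and an idle (load 0) thread pinned to each core, an unpinned 30%-active thread, a thread re-targeted to 50% activity, and a throwaway thread that is deleted again. Condensed into plain rpc.py calls (the socket default and plugin lookup via PYTHONPATH are assumptions; rpc_cmd in the trace wraps exactly these RPCs):

    rpc="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"   # unquoted on purpose below
    for m in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m $m -a 100
        $rpc scheduler_thread_create -n idle_pinned   -m $m -a 0
    done
    $rpc scheduler_thread_create -n one_third_active -a 30
    id=$($rpc scheduler_thread_create -n half_active -a 0)    # RPC prints the new thread id
    $rpc scheduler_thread_set_active "$id" 50                  # bump it to 50% busy
    id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$id"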
00:07:26.710   00:38:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.710  
00:07:26.710  real	0m2.621s
00:07:26.710  user	0m0.022s
00:07:26.710  sys	0m0.009s
00:07:26.710   00:38:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:26.710   00:38:15	-- common/autotest_common.sh@10 -- # set +x
00:07:26.710  ************************************
00:07:26.710  END TEST scheduler_create_thread
00:07:26.710  ************************************
00:07:26.710   00:38:15	-- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:26.710   00:38:15	-- scheduler/scheduler.sh@46 -- # killprocess 942745
00:07:26.710   00:38:15	-- common/autotest_common.sh@936 -- # '[' -z 942745 ']'
00:07:26.710   00:38:15	-- common/autotest_common.sh@940 -- # kill -0 942745
00:07:26.710    00:38:15	-- common/autotest_common.sh@941 -- # uname
00:07:26.710   00:38:15	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:26.710    00:38:15	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 942745
00:07:26.710   00:38:15	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:07:26.710   00:38:15	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:07:26.710   00:38:15	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 942745'
00:07:26.710  killing process with pid 942745
00:07:26.710   00:38:15	-- common/autotest_common.sh@955 -- # kill 942745
00:07:26.710   00:38:15	-- common/autotest_common.sh@960 -- # wait 942745
00:07:26.968  [2024-12-17 00:38:16.088419] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:27.227  POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:07:27.227  POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:07:27.227  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:07:27.227  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:07:27.227  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:07:27.227  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:07:27.227  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:07:27.227  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:07:27.227  
00:07:27.227  real	0m4.142s
00:07:27.227  user	0m6.116s
00:07:27.227  sys	0m0.529s
00:07:27.227   00:38:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:27.227   00:38:16	-- common/autotest_common.sh@10 -- # set +x
00:07:27.227  ************************************
00:07:27.227  END TEST event_scheduler
00:07:27.227  ************************************
00:07:27.227   00:38:16	-- event/event.sh@51 -- # modprobe -n nbd
00:07:27.227   00:38:16	-- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:27.227   00:38:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:27.227   00:38:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:27.227   00:38:16	-- common/autotest_common.sh@10 -- # set +x
00:07:27.486  ************************************
00:07:27.486  START TEST app_repeat
00:07:27.486  ************************************
00:07:27.486   00:38:16	-- common/autotest_common.sh@1114 -- # app_repeat_test
00:07:27.486   00:38:16	-- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:27.486   00:38:16	-- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:27.486   00:38:16	-- event/event.sh@13 -- # local nbd_list
00:07:27.486   00:38:16	-- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:27.486   00:38:16	-- event/event.sh@14 -- # local bdev_list
00:07:27.486   00:38:16	-- event/event.sh@15 -- # local repeat_times=4
00:07:27.486   00:38:16	-- event/event.sh@17 -- # modprobe nbd
00:07:27.486   00:38:16	-- event/event.sh@19 -- # repeat_pid=943291
00:07:27.486   00:38:16	-- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:27.486   00:38:16	-- event/event.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:27.486   00:38:16	-- event/event.sh@21 -- # echo 'Process app_repeat pid: 943291'
00:07:27.486  Process app_repeat pid: 943291
00:07:27.486   00:38:16	-- event/event.sh@23 -- # for i in {0..2}
00:07:27.486   00:38:16	-- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:27.486  spdk_app_start Round 0
00:07:27.486   00:38:16	-- event/event.sh@25 -- # waitforlisten 943291 /var/tmp/spdk-nbd.sock
00:07:27.486   00:38:16	-- common/autotest_common.sh@829 -- # '[' -z 943291 ']'
00:07:27.486   00:38:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:27.486   00:38:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:27.486   00:38:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:27.486  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:27.486   00:38:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:27.486   00:38:16	-- common/autotest_common.sh@10 -- # set +x
00:07:27.486  [2024-12-17 00:38:16.527658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:27.486  [2024-12-17 00:38:16.527726] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943291 ]
00:07:27.486  EAL: No free 2048 kB hugepages reported on node 1
00:07:27.486  [2024-12-17 00:38:16.623459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:27.486  [2024-12-17 00:38:16.677912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:27.486  [2024-12-17 00:38:16.677917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.423   00:38:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:28.423   00:38:17	-- common/autotest_common.sh@862 -- # return 0
00:07:28.423   00:38:17	-- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:28.682  Malloc0
00:07:28.682   00:38:17	-- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:28.941  Malloc1
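Malloc0 and Malloc1 above come from two bdev_malloc_create calls on the app's private socket; the nbd_rpc_data_verify steps that follow export them as kernel block devices. The equivalent direct calls, as a sketch:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # 64 MiB, 4 KiB blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0  # pair each bdev with an nbd node
    $rpc nbd_start_disk Malloc1 /dev/nbd1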
00:07:28.941   00:38:18	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@12 -- # local i
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:28.941   00:38:18	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:28.941  /dev/nbd0
00:07:29.201    00:38:18	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:29.201   00:38:18	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:29.201   00:38:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:29.201   00:38:18	-- common/autotest_common.sh@867 -- # local i
00:07:29.201   00:38:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:29.201   00:38:18	-- common/autotest_common.sh@871 -- # break
00:07:29.201   00:38:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:29.201  1+0 records in
00:07:29.201  1+0 records out
00:07:29.201  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250263 s, 16.4 MB/s
00:07:29.201    00:38:18	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:29.201   00:38:18	-- common/autotest_common.sh@884 -- # size=4096
00:07:29.201   00:38:18	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:29.201   00:38:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:29.201   00:38:18	-- common/autotest_common.sh@887 -- # return 0
00:07:29.201   00:38:18	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:29.201   00:38:18	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:29.201   00:38:18	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:29.201  /dev/nbd1
00:07:29.201    00:38:18	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:29.201   00:38:18	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:29.201   00:38:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:29.201   00:38:18	-- common/autotest_common.sh@867 -- # local i
00:07:29.201   00:38:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:29.201   00:38:18	-- common/autotest_common.sh@871 -- # break
00:07:29.201   00:38:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:29.201   00:38:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:29.201  1+0 records in
00:07:29.201  1+0 records out
00:07:29.201  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223603 s, 18.3 MB/s
00:07:29.201    00:38:18	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:29.201   00:38:18	-- common/autotest_common.sh@884 -- # size=4096
00:07:29.201   00:38:18	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:29.201   00:38:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:29.201   00:38:18	-- common/autotest_common.sh@887 -- # return 0
00:07:29.201   00:38:18	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:29.201   00:38:18	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
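Each nbd_start_disk call is followed by waitfornbd, which polls /proc/partitions until the kernel exposes the device and then proves it is readable with a single direct-I/O 4 KiB read. A condensed sketch reconstructed from the autotest_common.sh@866-887 tags above; the poll interval is an assumption, and the real helper also retries the read in its own 20-iteration loop:

    # wait for /dev/$1 to appear, then read one block bypassing the page cache
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed poll interval
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # a zero-byte read means the device is not serving data
    }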
00:07:29.201    00:38:18	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:29.201    00:38:18	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:29.201     00:38:18	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:29.460    00:38:18	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:29.460    {
00:07:29.460      "nbd_device": "/dev/nbd0",
00:07:29.460      "bdev_name": "Malloc0"
00:07:29.460    },
00:07:29.460    {
00:07:29.460      "nbd_device": "/dev/nbd1",
00:07:29.460      "bdev_name": "Malloc1"
00:07:29.460    }
00:07:29.460  ]'
00:07:29.460     00:38:18	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:29.460     00:38:18	-- bdev/nbd_common.sh@64 -- # echo '[
00:07:29.460    {
00:07:29.460      "nbd_device": "/dev/nbd0",
00:07:29.460      "bdev_name": "Malloc0"
00:07:29.460    },
00:07:29.460    {
00:07:29.460      "nbd_device": "/dev/nbd1",
00:07:29.460      "bdev_name": "Malloc1"
00:07:29.460    }
00:07:29.460  ]'
00:07:29.720    00:38:18	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:29.720  /dev/nbd1'
00:07:29.720     00:38:18	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:29.720  /dev/nbd1'
00:07:29.720     00:38:18	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:29.720    00:38:18	-- bdev/nbd_common.sh@65 -- # count=2
00:07:29.720    00:38:18	-- bdev/nbd_common.sh@66 -- # echo 2
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@95 -- # count=2
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
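nbd_rpc_data_verify then cross-checks that the RPC server agrees both devices are attached: nbd_get_disks returns the JSON array seen above, jq extracts each nbd_device path, and grep -c counts them; the count must equal the number of devices requested. Roughly, per the nbd_common.sh@61-66 tags:

    # count nbd devices currently exported by the app
    nbd_get_count() {
        local rpc_server=$1 names
        names=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true   # grep -c exits 1 on zero matches
    }

The `|| true` fallback is why a bare `true` shows up in the trace later, after teardown, when the count is legitimately zero.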
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@71 -- # local operation=write
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:29.720  256+0 records in
00:07:29.720  256+0 records out
00:07:29.720  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117406 s, 89.3 MB/s
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:29.720  256+0 records in
00:07:29.720  256+0 records out
00:07:29.720  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237808 s, 44.1 MB/s
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:29.720  256+0 records in
00:07:29.720  256+0 records out
00:07:29.720  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271358 s, 38.6 MB/s
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
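With both devices up, the data path is exercised: 1 MiB of /dev/urandom is staged in a temp file, written to each nbd device with O_DIRECT, and then compared byte-for-byte against the source. The pattern, condensed from the nbd_common.sh@70-85 tags ($SPDK_DIR is a stand-in for the workspace path in the trace):

    # write one shared 1 MiB random pattern to every device, then verify it back
    tmp_file=$SPDK_DIR/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$dev bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $dev    # silent on success, fails the test on mismatch
    done
    rm $tmp_file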
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@51 -- # local i
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:29.720   00:38:18	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:29.979    00:38:19	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@41 -- # break
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@45 -- # return 0
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:29.979   00:38:19	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:30.237    00:38:19	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:30.237   00:38:19	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:30.237   00:38:19	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:30.238   00:38:19	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:30.238   00:38:19	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:30.238   00:38:19	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:30.238   00:38:19	-- bdev/nbd_common.sh@41 -- # break
00:07:30.238   00:38:19	-- bdev/nbd_common.sh@45 -- # return 0
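Teardown mirrors startup: nbd_stop_disk detaches each device over RPC, and waitfornbd_exit polls /proc/partitions until the entry disappears, so the next round cannot race a half-removed device. A sketch under the same assumptions as waitfornbd above, per the nbd_common.sh@35-45 tags:

    # wait for the kernel to retire /dev/$1 after nbd_stop_disk
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone, done
            sleep 0.1    # assumed poll interval
        done
        return 0
    }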
00:07:30.238    00:38:19	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:30.238    00:38:19	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:30.238     00:38:19	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:30.497    00:38:19	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:30.497     00:38:19	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:30.497     00:38:19	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:30.497    00:38:19	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:30.497     00:38:19	-- bdev/nbd_common.sh@65 -- # echo ''
00:07:30.497     00:38:19	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:30.497     00:38:19	-- bdev/nbd_common.sh@65 -- # true
00:07:30.497    00:38:19	-- bdev/nbd_common.sh@65 -- # count=0
00:07:30.497    00:38:19	-- bdev/nbd_common.sh@66 -- # echo 0
00:07:30.497   00:38:19	-- bdev/nbd_common.sh@104 -- # count=0
00:07:30.497   00:38:19	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:30.497   00:38:19	-- bdev/nbd_common.sh@109 -- # return 0
00:07:30.497   00:38:19	-- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:30.756   00:38:20	-- event/event.sh@35 -- # sleep 3
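The spdk_kill_instance SIGTERM above ends Round 0, and sleep 3 gives the app_repeat binary time to restart its reactors before the next pass. Reassembled from the event.sh@23-35 tags scattered through the trace, the driving loop is approximately as follows; helper bodies and exact argument handling are assumptions, and the helpers come from the sourced common scripts:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $app_pid /var/tmp/spdk-nbd.sock               # app is up again
        $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
        $rpc -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end this round
        sleep 3
    done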
00:07:31.015  [2024-12-17 00:38:20.213637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:31.015  [2024-12-17 00:38:20.261905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:31.015  [2024-12-17 00:38:20.261912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.274  [2024-12-17 00:38:20.313838] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:31.274  [2024-12-17 00:38:20.313893] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:33.809   00:38:23	-- event/event.sh@23 -- # for i in {0..2}
00:07:33.809   00:38:23	-- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:33.809  spdk_app_start Round 1
00:07:33.809   00:38:23	-- event/event.sh@25 -- # waitforlisten 943291 /var/tmp/spdk-nbd.sock
00:07:33.809   00:38:23	-- common/autotest_common.sh@829 -- # '[' -z 943291 ']'
00:07:33.809   00:38:23	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:33.809   00:38:23	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:33.809   00:38:23	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:33.809  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:33.809   00:38:23	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:33.809   00:38:23	-- common/autotest_common.sh@10 -- # set +x
00:07:34.068   00:38:23	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:34.068   00:38:23	-- common/autotest_common.sh@862 -- # return 0
00:07:34.068   00:38:23	-- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:34.327  Malloc0
00:07:34.327   00:38:23	-- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:34.586  Malloc1
00:07:34.586   00:38:23	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@12 -- # local i
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:34.586   00:38:23	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:34.845  /dev/nbd0
00:07:34.845    00:38:23	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:34.845   00:38:23	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:34.845   00:38:23	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:34.845   00:38:23	-- common/autotest_common.sh@867 -- # local i
00:07:34.845   00:38:23	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:34.845   00:38:23	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:34.845   00:38:23	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:34.845   00:38:23	-- common/autotest_common.sh@871 -- # break
00:07:34.845   00:38:23	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:34.845   00:38:23	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:34.845   00:38:23	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:34.845  1+0 records in
00:07:34.845  1+0 records out
00:07:34.845  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241759 s, 16.9 MB/s
00:07:34.845    00:38:23	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:34.845   00:38:23	-- common/autotest_common.sh@884 -- # size=4096
00:07:34.845   00:38:23	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:34.845   00:38:23	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:34.845   00:38:23	-- common/autotest_common.sh@887 -- # return 0
00:07:34.845   00:38:23	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:34.845   00:38:23	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:34.845   00:38:23	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:35.104  /dev/nbd1
00:07:35.104    00:38:24	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:35.104   00:38:24	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:35.104   00:38:24	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:35.104   00:38:24	-- common/autotest_common.sh@867 -- # local i
00:07:35.104   00:38:24	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:35.104   00:38:24	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:35.104   00:38:24	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:35.104   00:38:24	-- common/autotest_common.sh@871 -- # break
00:07:35.104   00:38:24	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:35.104   00:38:24	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:35.104   00:38:24	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:35.104  1+0 records in
00:07:35.104  1+0 records out
00:07:35.104  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255673 s, 16.0 MB/s
00:07:35.104    00:38:24	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:35.104   00:38:24	-- common/autotest_common.sh@884 -- # size=4096
00:07:35.104   00:38:24	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:35.104   00:38:24	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:35.104   00:38:24	-- common/autotest_common.sh@887 -- # return 0
00:07:35.104   00:38:24	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:35.104   00:38:24	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:35.104    00:38:24	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:35.104    00:38:24	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:35.104     00:38:24	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:35.363    00:38:24	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:35.363    {
00:07:35.363      "nbd_device": "/dev/nbd0",
00:07:35.363      "bdev_name": "Malloc0"
00:07:35.363    },
00:07:35.363    {
00:07:35.363      "nbd_device": "/dev/nbd1",
00:07:35.363      "bdev_name": "Malloc1"
00:07:35.363    }
00:07:35.363  ]'
00:07:35.363     00:38:24	-- bdev/nbd_common.sh@64 -- # echo '[
00:07:35.363    {
00:07:35.363      "nbd_device": "/dev/nbd0",
00:07:35.363      "bdev_name": "Malloc0"
00:07:35.363    },
00:07:35.363    {
00:07:35.363      "nbd_device": "/dev/nbd1",
00:07:35.363      "bdev_name": "Malloc1"
00:07:35.363    }
00:07:35.363  ]'
00:07:35.363     00:38:24	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:35.363    00:38:24	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:35.363  /dev/nbd1'
00:07:35.363     00:38:24	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:35.363  /dev/nbd1'
00:07:35.363     00:38:24	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:35.363    00:38:24	-- bdev/nbd_common.sh@65 -- # count=2
00:07:35.363    00:38:24	-- bdev/nbd_common.sh@66 -- # echo 2
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@95 -- # count=2
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@71 -- # local operation=write
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:35.363  256+0 records in
00:07:35.363  256+0 records out
00:07:35.363  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115805 s, 90.5 MB/s
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:35.363  256+0 records in
00:07:35.363  256+0 records out
00:07:35.363  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221734 s, 47.3 MB/s
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:35.363   00:38:24	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:35.622  256+0 records in
00:07:35.622  256+0 records out
00:07:35.622  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298305 s, 35.2 MB/s
00:07:35.622   00:38:24	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:35.622   00:38:24	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.622   00:38:24	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:35.622   00:38:24	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@51 -- # local i
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:35.623   00:38:24	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:35.882    00:38:24	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@41 -- # break
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@45 -- # return 0
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:35.882   00:38:24	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:36.141    00:38:25	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:36.141   00:38:25	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:36.141   00:38:25	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:36.141   00:38:25	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:36.141   00:38:25	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:36.141   00:38:25	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:36.141   00:38:25	-- bdev/nbd_common.sh@41 -- # break
00:07:36.141   00:38:25	-- bdev/nbd_common.sh@45 -- # return 0
00:07:36.141    00:38:25	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:36.141    00:38:25	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:36.141     00:38:25	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:36.400    00:38:25	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:36.400     00:38:25	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:36.400     00:38:25	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:36.400    00:38:25	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:36.400     00:38:25	-- bdev/nbd_common.sh@65 -- # echo ''
00:07:36.400     00:38:25	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:36.400     00:38:25	-- bdev/nbd_common.sh@65 -- # true
00:07:36.400    00:38:25	-- bdev/nbd_common.sh@65 -- # count=0
00:07:36.400    00:38:25	-- bdev/nbd_common.sh@66 -- # echo 0
00:07:36.400   00:38:25	-- bdev/nbd_common.sh@104 -- # count=0
00:07:36.400   00:38:25	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:36.400   00:38:25	-- bdev/nbd_common.sh@109 -- # return 0
00:07:36.400   00:38:25	-- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:36.659   00:38:25	-- event/event.sh@35 -- # sleep 3
00:07:36.659  [2024-12-17 00:38:25.882570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:36.919  [2024-12-17 00:38:25.931880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:36.919  [2024-12-17 00:38:25.931893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.919  [2024-12-17 00:38:25.984002] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:36.919  [2024-12-17 00:38:25.984059] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:39.455   00:38:28	-- event/event.sh@23 -- # for i in {0..2}
00:07:39.455   00:38:28	-- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:39.455  spdk_app_start Round 2
00:07:39.455   00:38:28	-- event/event.sh@25 -- # waitforlisten 943291 /var/tmp/spdk-nbd.sock
00:07:39.455   00:38:28	-- common/autotest_common.sh@829 -- # '[' -z 943291 ']'
00:07:39.455   00:38:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:39.455   00:38:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:39.455   00:38:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:39.455  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:39.455   00:38:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:39.455   00:38:28	-- common/autotest_common.sh@10 -- # set +x
00:07:39.714   00:38:28	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:39.714   00:38:28	-- common/autotest_common.sh@862 -- # return 0
00:07:39.714   00:38:28	-- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:39.974  Malloc0
00:07:39.974   00:38:29	-- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:40.233  Malloc1
00:07:40.233   00:38:29	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@12 -- # local i
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:40.233   00:38:29	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:40.492  /dev/nbd0
00:07:40.492    00:38:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:40.492   00:38:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:40.492   00:38:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:40.492   00:38:29	-- common/autotest_common.sh@867 -- # local i
00:07:40.492   00:38:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:40.493   00:38:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:40.493   00:38:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:40.493   00:38:29	-- common/autotest_common.sh@871 -- # break
00:07:40.493   00:38:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:40.493   00:38:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:40.493   00:38:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:40.493  1+0 records in
00:07:40.493  1+0 records out
00:07:40.493  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252104 s, 16.2 MB/s
00:07:40.493    00:38:29	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:40.493   00:38:29	-- common/autotest_common.sh@884 -- # size=4096
00:07:40.493   00:38:29	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:40.493   00:38:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:40.493   00:38:29	-- common/autotest_common.sh@887 -- # return 0
00:07:40.493   00:38:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:40.493   00:38:29	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:40.493   00:38:29	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:40.752  /dev/nbd1
00:07:40.752    00:38:29	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:40.752   00:38:29	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:40.752   00:38:29	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:40.752   00:38:29	-- common/autotest_common.sh@867 -- # local i
00:07:40.752   00:38:29	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:40.752   00:38:29	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:40.752   00:38:29	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:40.752   00:38:29	-- common/autotest_common.sh@871 -- # break
00:07:40.752   00:38:29	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:40.752   00:38:29	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:40.752   00:38:29	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:40.752  1+0 records in
00:07:40.752  1+0 records out
00:07:40.752  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261307 s, 15.7 MB/s
00:07:40.752    00:38:29	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:40.752   00:38:29	-- common/autotest_common.sh@884 -- # size=4096
00:07:40.752   00:38:29	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:07:40.752   00:38:29	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:40.752   00:38:29	-- common/autotest_common.sh@887 -- # return 0
00:07:40.752   00:38:29	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:40.752   00:38:29	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:40.752    00:38:29	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:40.752    00:38:29	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.752     00:38:29	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:41.011    00:38:30	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:41.011    {
00:07:41.011      "nbd_device": "/dev/nbd0",
00:07:41.011      "bdev_name": "Malloc0"
00:07:41.011    },
00:07:41.011    {
00:07:41.011      "nbd_device": "/dev/nbd1",
00:07:41.011      "bdev_name": "Malloc1"
00:07:41.011    }
00:07:41.011  ]'
00:07:41.011     00:38:30	-- bdev/nbd_common.sh@64 -- # echo '[
00:07:41.011    {
00:07:41.011      "nbd_device": "/dev/nbd0",
00:07:41.011      "bdev_name": "Malloc0"
00:07:41.011    },
00:07:41.011    {
00:07:41.011      "nbd_device": "/dev/nbd1",
00:07:41.011      "bdev_name": "Malloc1"
00:07:41.011    }
00:07:41.011  ]'
00:07:41.011     00:38:30	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:41.011    00:38:30	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:41.011  /dev/nbd1'
00:07:41.270     00:38:30	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:41.270  /dev/nbd1'
00:07:41.270     00:38:30	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:41.270    00:38:30	-- bdev/nbd_common.sh@65 -- # count=2
00:07:41.270    00:38:30	-- bdev/nbd_common.sh@66 -- # echo 2
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@95 -- # count=2
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@71 -- # local operation=write
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:41.270  256+0 records in
00:07:41.270  256+0 records out
00:07:41.270  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010295 s, 102 MB/s
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:41.270  256+0 records in
00:07:41.270  256+0 records out
00:07:41.270  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290288 s, 36.1 MB/s
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:41.270  256+0 records in
00:07:41.270  256+0 records out
00:07:41.270  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254579 s, 41.2 MB/s
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:07:41.270   00:38:30	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:41.271   00:38:30	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:41.271   00:38:30	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.271   00:38:30	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:41.271   00:38:30	-- bdev/nbd_common.sh@51 -- # local i
00:07:41.271   00:38:30	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:41.271   00:38:30	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:41.529    00:38:30	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:41.529   00:38:30	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:41.529   00:38:30	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:41.529   00:38:30	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:41.530   00:38:30	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:41.530   00:38:30	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:41.530   00:38:30	-- bdev/nbd_common.sh@41 -- # break
00:07:41.530   00:38:30	-- bdev/nbd_common.sh@45 -- # return 0
00:07:41.530   00:38:30	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:41.530   00:38:30	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:41.789    00:38:30	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:41.789   00:38:30	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:41.789   00:38:30	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:41.789   00:38:30	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:41.789   00:38:30	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:41.789   00:38:30	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:41.789   00:38:30	-- bdev/nbd_common.sh@41 -- # break
00:07:41.789   00:38:30	-- bdev/nbd_common.sh@45 -- # return 0
00:07:41.789    00:38:30	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:41.789    00:38:30	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:41.789     00:38:30	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:42.048    00:38:31	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:42.048     00:38:31	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:42.048     00:38:31	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:42.048    00:38:31	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:42.048     00:38:31	-- bdev/nbd_common.sh@65 -- # echo ''
00:07:42.048     00:38:31	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:42.048     00:38:31	-- bdev/nbd_common.sh@65 -- # true
00:07:42.048    00:38:31	-- bdev/nbd_common.sh@65 -- # count=0
00:07:42.048    00:38:31	-- bdev/nbd_common.sh@66 -- # echo 0
00:07:42.048   00:38:31	-- bdev/nbd_common.sh@104 -- # count=0
00:07:42.048   00:38:31	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:42.048   00:38:31	-- bdev/nbd_common.sh@109 -- # return 0
00:07:42.048   00:38:31	-- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:42.307   00:38:31	-- event/event.sh@35 -- # sleep 3
00:07:42.566  [2024-12-17 00:38:31.683385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:42.566  [2024-12-17 00:38:31.728740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:42.566  [2024-12-17 00:38:31.728746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.566  [2024-12-17 00:38:31.773713] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:42.566  [2024-12-17 00:38:31.773760] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:45.854   00:38:34	-- event/event.sh@38 -- # waitforlisten 943291 /var/tmp/spdk-nbd.sock
00:07:45.854   00:38:34	-- common/autotest_common.sh@829 -- # '[' -z 943291 ']'
00:07:45.854   00:38:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:45.854   00:38:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:45.854   00:38:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:45.854  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:45.854   00:38:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:45.854   00:38:34	-- common/autotest_common.sh@10 -- # set +x
00:07:45.854   00:38:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:45.854   00:38:34	-- common/autotest_common.sh@862 -- # return 0
00:07:45.854   00:38:34	-- event/event.sh@39 -- # killprocess 943291
00:07:45.854   00:38:34	-- common/autotest_common.sh@936 -- # '[' -z 943291 ']'
00:07:45.854   00:38:34	-- common/autotest_common.sh@940 -- # kill -0 943291
00:07:45.854    00:38:34	-- common/autotest_common.sh@941 -- # uname
00:07:45.854   00:38:34	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:45.854    00:38:34	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 943291
00:07:45.854   00:38:34	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:45.854   00:38:34	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:45.854   00:38:34	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 943291'
00:07:45.854  killing process with pid 943291
00:07:45.854   00:38:34	-- common/autotest_common.sh@955 -- # kill 943291
00:07:45.854   00:38:34	-- common/autotest_common.sh@960 -- # wait 943291
00:07:45.854  spdk_app_start is called in Round 0.
00:07:45.854  Shutdown signal received, stop current app iteration
00:07:45.854  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:07:45.854  spdk_app_start is called in Round 1.
00:07:45.854  Shutdown signal received, stop current app iteration
00:07:45.854  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:07:45.854  spdk_app_start is called in Round 2.
00:07:45.854  Shutdown signal received, stop current app iteration
00:07:45.854  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:07:45.854  spdk_app_start is called in Round 3.
00:07:45.854  Shutdown signal received, stop current app iteration
00:07:45.854   00:38:34	-- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:45.854   00:38:34	-- event/event.sh@42 -- # return 0
00:07:45.854  
00:07:45.854  real	0m18.480s
00:07:45.854  user	0m40.439s
00:07:45.854  sys	0m3.643s
00:07:45.854   00:38:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:45.854   00:38:34	-- common/autotest_common.sh@10 -- # set +x
00:07:45.854  ************************************
00:07:45.854  END TEST app_repeat
00:07:45.854  ************************************
00:07:45.854   00:38:35	-- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:45.854   00:38:35	-- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:45.854   00:38:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:45.854   00:38:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:45.854   00:38:35	-- common/autotest_common.sh@10 -- # set +x
00:07:45.854  ************************************
00:07:45.854  START TEST cpu_locks
00:07:45.854  ************************************
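run_test is the harness behind every START/END banner in this log: it prints the opening banner, times the given command (the real/user/sys block above the END TEST app_repeat banner comes from that timing), and prints the closing banner. The autotest_common.sh@1087-1115 tags suggest roughly the following; the exact body is an assumption:

    # run a named sub-test, bracketing it with banners and timing it
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }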
00:07:45.854   00:38:35	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh
00:07:46.112  * Looking for test storage...
00:07:46.112  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event
00:07:46.112    00:38:35	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:46.112     00:38:35	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:46.112     00:38:35	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:46.112    00:38:35	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:46.112    00:38:35	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:46.112    00:38:35	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:46.112    00:38:35	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:46.112    00:38:35	-- scripts/common.sh@335 -- # IFS=.-:
00:07:46.112    00:38:35	-- scripts/common.sh@335 -- # read -ra ver1
00:07:46.112    00:38:35	-- scripts/common.sh@336 -- # IFS=.-:
00:07:46.112    00:38:35	-- scripts/common.sh@336 -- # read -ra ver2
00:07:46.112    00:38:35	-- scripts/common.sh@337 -- # local 'op=<'
00:07:46.112    00:38:35	-- scripts/common.sh@339 -- # ver1_l=2
00:07:46.112    00:38:35	-- scripts/common.sh@340 -- # ver2_l=1
00:07:46.112    00:38:35	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:46.112    00:38:35	-- scripts/common.sh@343 -- # case "$op" in
00:07:46.112    00:38:35	-- scripts/common.sh@344 -- # : 1
00:07:46.112    00:38:35	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:46.112    00:38:35	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:46.112     00:38:35	-- scripts/common.sh@364 -- # decimal 1
00:07:46.112     00:38:35	-- scripts/common.sh@352 -- # local d=1
00:07:46.112     00:38:35	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:46.112     00:38:35	-- scripts/common.sh@354 -- # echo 1
00:07:46.112    00:38:35	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:46.112     00:38:35	-- scripts/common.sh@365 -- # decimal 2
00:07:46.112     00:38:35	-- scripts/common.sh@352 -- # local d=2
00:07:46.112     00:38:35	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:46.112     00:38:35	-- scripts/common.sh@354 -- # echo 2
00:07:46.112    00:38:35	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:46.112    00:38:35	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:46.112    00:38:35	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:46.112    00:38:35	-- scripts/common.sh@367 -- # return 0
00:07:46.112    00:38:35	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:46.112    00:38:35	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:46.112  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.112  		--rc genhtml_branch_coverage=1
00:07:46.112  		--rc genhtml_function_coverage=1
00:07:46.112  		--rc genhtml_legend=1
00:07:46.112  		--rc geninfo_all_blocks=1
00:07:46.112  		--rc geninfo_unexecuted_blocks=1
00:07:46.112  		
00:07:46.112  		'
00:07:46.112    00:38:35	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:46.112  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.112  		--rc genhtml_branch_coverage=1
00:07:46.112  		--rc genhtml_function_coverage=1
00:07:46.112  		--rc genhtml_legend=1
00:07:46.112  		--rc geninfo_all_blocks=1
00:07:46.112  		--rc geninfo_unexecuted_blocks=1
00:07:46.112  		
00:07:46.112  		'
00:07:46.112    00:38:35	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:46.112  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.112  		--rc genhtml_branch_coverage=1
00:07:46.112  		--rc genhtml_function_coverage=1
00:07:46.112  		--rc genhtml_legend=1
00:07:46.112  		--rc geninfo_all_blocks=1
00:07:46.112  		--rc geninfo_unexecuted_blocks=1
00:07:46.112  		
00:07:46.112  		'
00:07:46.112    00:38:35	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:46.112  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.112  		--rc genhtml_branch_coverage=1
00:07:46.112  		--rc genhtml_function_coverage=1
00:07:46.112  		--rc genhtml_legend=1
00:07:46.112  		--rc geninfo_all_blocks=1
00:07:46.112  		--rc geninfo_unexecuted_blocks=1
00:07:46.112  		
00:07:46.112  		'
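The burst of scripts/common.sh activity above is autotest_common.sh probing the installed lcov: it extracts the version field with awk, asks `lt $version 2` (which routes through cmp_versions with op '<'), and exports the branch/function-coverage flags accordingly. The comparison splits each version on `.`, `-` or `:` and compares numeric components left to right. Condensed into a single function; the real code goes through cmp_versions/decimal with more validation:

    # lt A B: succeed when version A sorts strictly before version B
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # versions are equal, so not strictly less
    }

With the trace's inputs, `lt 1.15 2` compares 1 against 2 in the first component and returns success, selecting the lcov 1.x option set exported above.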
00:07:46.112   00:38:35	-- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:46.112   00:38:35	-- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:46.112   00:38:35	-- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:46.112   00:38:35	-- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:46.112   00:38:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:46.112   00:38:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:46.112   00:38:35	-- common/autotest_common.sh@10 -- # set +x
00:07:46.112  ************************************
00:07:46.112  START TEST default_locks
00:07:46.112  ************************************
00:07:46.112   00:38:35	-- common/autotest_common.sh@1114 -- # default_locks
00:07:46.112   00:38:35	-- event/cpu_locks.sh@46 -- # spdk_tgt_pid=945978
00:07:46.112   00:38:35	-- event/cpu_locks.sh@47 -- # waitforlisten 945978
00:07:46.112   00:38:35	-- common/autotest_common.sh@829 -- # '[' -z 945978 ']'
00:07:46.112   00:38:35	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:46.112   00:38:35	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:46.113   00:38:35	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:46.113  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:46.113   00:38:35	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:46.113   00:38:35	-- common/autotest_common.sh@10 -- # set +x
00:07:46.113   00:38:35	-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:46.113  [2024-12-17 00:38:35.283462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:46.113  [2024-12-17 00:38:35.283534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945978 ]
00:07:46.113  EAL: No free 2048 kB hugepages reported on node 1
00:07:46.371  [2024-12-17 00:38:35.390325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.371  [2024-12-17 00:38:35.437865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:46.371  [2024-12-17 00:38:35.438049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.371  [2024-12-17 00:38:35.600589] 'OCF_Core' volume operations registered
00:07:46.371  [2024-12-17 00:38:35.602897] 'OCF_Cache' volume operations registered
00:07:46.371  [2024-12-17 00:38:35.605711] 'OCF Composite' volume operations registered
00:07:46.371  [2024-12-17 00:38:35.608091] 'SPDK_block_device' volume operations registered
00:07:47.306   00:38:36	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:47.307   00:38:36	-- common/autotest_common.sh@862 -- # return 0
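waitforlisten (tags @829-@862) blocks the test until the freshly launched spdk_tgt both exists as a process and answers on its UNIX-domain RPC socket; the `(( i == 0 ))` / `return 0` pair above is its success path. A rough sketch only: the liveness probe, the retry cadence, and the RPC used to test the socket are all assumptions here:

    # wait for pid $1 to come up and serve RPCs on socket $2
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1                      # target died early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0  # assumed probe
            sleep 0.5                                                    # assumed cadence
        done
        return 1
    }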
00:07:47.307   00:38:36	-- event/cpu_locks.sh@49 -- # locks_exist 945978
00:07:47.307   00:38:36	-- event/cpu_locks.sh@22 -- # lslocks -p 945978
00:07:47.307   00:38:36	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:47.872  lslocks: write error
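locks_exist (cpu_locks.sh@22) checks that the target pid holds an spdk_cpu_lock file by piping lslocks into grep -q. The stray `lslocks: write error` above is harmless: grep -q exits as soon as it matches, lslocks keeps writing into the closed pipe, and reports EPIPE. In short:

    # succeed when pid $1 holds at least one SPDK CPU-core lock
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
        # grep -q closes the pipe on first match; lslocks' resulting
        # "write error" (EPIPE) is expected noise, not a failure
    }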
00:07:47.872   00:38:37	-- event/cpu_locks.sh@50 -- # killprocess 945978
00:07:47.872   00:38:37	-- common/autotest_common.sh@936 -- # '[' -z 945978 ']'
00:07:47.872   00:38:37	-- common/autotest_common.sh@940 -- # kill -0 945978
00:07:47.872    00:38:37	-- common/autotest_common.sh@941 -- # uname
00:07:47.872   00:38:37	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:47.872    00:38:37	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 945978
00:07:47.872   00:38:37	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:47.872   00:38:37	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:47.872   00:38:37	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 945978'
00:07:47.872  killing process with pid 945978
00:07:47.872   00:38:37	-- common/autotest_common.sh@955 -- # kill 945978
00:07:47.873   00:38:37	-- common/autotest_common.sh@960 -- # wait 945978
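killprocess (tags @936-@960) is the standard teardown: confirm the pid is alive, check on Linux that its comm name is one of ours (reactor_0 here, an SPDK reactor thread, rather than a stray sudo), send the signal, and reap it. Approximately as below; the sudo special case is simplified away:

    # terminate an SPDK target by pid and reap it
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # is it still alive?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # the real helper branches when process_name is "sudo"; omitted here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }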
00:07:48.441   00:38:37	-- event/cpu_locks.sh@52 -- # NOT waitforlisten 945978
00:07:48.441   00:38:37	-- common/autotest_common.sh@650 -- # local es=0
00:07:48.441   00:38:37	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 945978
00:07:48.441   00:38:37	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:48.441   00:38:37	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:48.441    00:38:37	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:48.441   00:38:37	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:48.441   00:38:37	-- common/autotest_common.sh@653 -- # waitforlisten 945978
00:07:48.441   00:38:37	-- common/autotest_common.sh@829 -- # '[' -z 945978 ']'
00:07:48.441   00:38:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:48.441   00:38:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:48.441   00:38:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:48.441  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:48.441   00:38:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:48.441   00:38:37	-- common/autotest_common.sh@10 -- # set +x
00:07:48.441  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (945978) - No such process
00:07:48.441  ERROR: process (pid: 945978) is no longer running
00:07:48.441   00:38:37	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:48.441   00:38:37	-- common/autotest_common.sh@862 -- # return 1
00:07:48.441   00:38:37	-- common/autotest_common.sh@653 -- # es=1
00:07:48.441   00:38:37	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:48.441   00:38:37	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:48.441   00:38:37	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:48.441   00:38:37	-- event/cpu_locks.sh@54 -- # no_locks
00:07:48.441   00:38:37	-- event/cpu_locks.sh@26 -- # lock_files=()
00:07:48.441   00:38:37	-- event/cpu_locks.sh@26 -- # local lock_files
00:07:48.441   00:38:37	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
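default_locks then asserts the negative space: NOT waitforlisten must fail (the pid is gone, hence the "No such process" line above), and no_locks must find zero leftover spdk_cpu_lock files. Condensed from the @650-@677 and cpu_locks.sh@26-27 tags; the valid_exec_arg check is dropped, the signal handling is simplified, and the lock-file location is an assumption:

    # NOT cmd...: succeed only when cmd fails normally
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # died by signal: treat as a hard failure (assumption)
        (( es != 0 ))                # invert: non-zero exit from cmd means NOT passes
    }

    # no_locks: assert that no spdk_cpu_lock files remain on disk
    no_locks() {
        shopt -s nullglob
        local lock_files=( /var/tmp/spdk_cpu_lock* )   # assumed lock-file location
        shopt -u nullglob
        (( ${#lock_files[@]} == 0 ))                   # pass only when none remain
    }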
00:07:48.441  
00:07:48.441  real	0m2.415s
00:07:48.441  user	0m2.541s
00:07:48.441  sys	0m0.938s
00:07:48.441   00:38:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:48.441   00:38:37	-- common/autotest_common.sh@10 -- # set +x
00:07:48.441  ************************************
00:07:48.441  END TEST default_locks
00:07:48.441  ************************************
00:07:48.441   00:38:37	-- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:48.441   00:38:37	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:48.441   00:38:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.441   00:38:37	-- common/autotest_common.sh@10 -- # set +x
00:07:48.441  ************************************
00:07:48.441  START TEST default_locks_via_rpc
00:07:48.441  ************************************
00:07:48.441   00:38:37	-- common/autotest_common.sh@1114 -- # default_locks_via_rpc
00:07:48.441   00:38:37	-- event/cpu_locks.sh@62 -- # spdk_tgt_pid=946320
00:07:48.441   00:38:37	-- event/cpu_locks.sh@63 -- # waitforlisten 946320
00:07:48.441   00:38:37	-- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:48.441   00:38:37	-- common/autotest_common.sh@829 -- # '[' -z 946320 ']'
00:07:48.441   00:38:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:48.441   00:38:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:48.441   00:38:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:48.442  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:48.442   00:38:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:48.442   00:38:37	-- common/autotest_common.sh@10 -- # set +x
00:07:48.701  [2024-12-17 00:38:37.751389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:48.701  [2024-12-17 00:38:37.751470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946320 ]
00:07:48.701  EAL: No free 2048 kB hugepages reported on node 1
00:07:48.701  [2024-12-17 00:38:37.861169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.701  [2024-12-17 00:38:37.909922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:48.701  [2024-12-17 00:38:37.910084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.959  [2024-12-17 00:38:38.073131] 'OCF_Core' volume operations registered
00:07:48.959  [2024-12-17 00:38:38.075534] 'OCF_Cache' volume operations registered
00:07:48.959  [2024-12-17 00:38:38.078467] 'OCF Composite' volume operations registered
00:07:48.959  [2024-12-17 00:38:38.080920] 'SPDK_block_device' volume operations registered
00:07:49.526   00:38:38	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:49.526   00:38:38	-- common/autotest_common.sh@862 -- # return 0
00:07:49.526   00:38:38	-- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:49.526   00:38:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.526   00:38:38	-- common/autotest_common.sh@10 -- # set +x
00:07:49.526   00:38:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.526   00:38:38	-- event/cpu_locks.sh@67 -- # no_locks
00:07:49.526   00:38:38	-- event/cpu_locks.sh@26 -- # lock_files=()
00:07:49.526   00:38:38	-- event/cpu_locks.sh@26 -- # local lock_files
00:07:49.526   00:38:38	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:49.526   00:38:38	-- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:49.526   00:38:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.526   00:38:38	-- common/autotest_common.sh@10 -- # set +x
00:07:49.526   00:38:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.526   00:38:38	-- event/cpu_locks.sh@71 -- # locks_exist 946320
00:07:49.526   00:38:38	-- event/cpu_locks.sh@22 -- # lslocks -p 946320
00:07:49.526   00:38:38	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:50.462   00:38:39	-- event/cpu_locks.sh@73 -- # killprocess 946320
00:07:50.462   00:38:39	-- common/autotest_common.sh@936 -- # '[' -z 946320 ']'
00:07:50.462   00:38:39	-- common/autotest_common.sh@940 -- # kill -0 946320
00:07:50.462    00:38:39	-- common/autotest_common.sh@941 -- # uname
00:07:50.462   00:38:39	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:50.462    00:38:39	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 946320
00:07:50.462   00:38:39	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:50.462   00:38:39	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:50.462   00:38:39	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 946320'
00:07:50.462  killing process with pid 946320
00:07:50.462   00:38:39	-- common/autotest_common.sh@955 -- # kill 946320
00:07:50.462   00:38:39	-- common/autotest_common.sh@960 -- # wait 946320
00:07:51.029  
00:07:51.029  real	0m2.309s
00:07:51.029  user	0m2.344s
00:07:51.029  sys	0m0.962s
00:07:51.029   00:38:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:51.029   00:38:40	-- common/autotest_common.sh@10 -- # set +x
00:07:51.029  ************************************
00:07:51.029  END TEST default_locks_via_rpc
00:07:51.029  ************************************
00:07:51.029   00:38:40	-- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:51.029   00:38:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:51.029   00:38:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:51.029   00:38:40	-- common/autotest_common.sh@10 -- # set +x
00:07:51.029  ************************************
00:07:51.029  START TEST non_locking_app_on_locked_coremask
00:07:51.029  ************************************
00:07:51.029   00:38:40	-- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask
00:07:51.029   00:38:40	-- event/cpu_locks.sh@80 -- # spdk_tgt_pid=946705
00:07:51.029   00:38:40	-- event/cpu_locks.sh@81 -- # waitforlisten 946705 /var/tmp/spdk.sock
00:07:51.029   00:38:40	-- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:51.029   00:38:40	-- common/autotest_common.sh@829 -- # '[' -z 946705 ']'
00:07:51.029   00:38:40	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.029   00:38:40	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:51.029   00:38:40	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:51.029  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:51.029   00:38:40	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:51.029   00:38:40	-- common/autotest_common.sh@10 -- # set +x
00:07:51.029  [2024-12-17 00:38:40.110463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:51.030  [2024-12-17 00:38:40.110547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946705 ]
00:07:51.030  EAL: No free 2048 kB hugepages reported on node 1
00:07:51.030  [2024-12-17 00:38:40.220696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.030  [2024-12-17 00:38:40.269316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:51.030  [2024-12-17 00:38:40.269486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.288  [2024-12-17 00:38:40.428343] 'OCF_Core' volume operations registered
00:07:51.288  [2024-12-17 00:38:40.430504] 'OCF_Cache' volume operations registered
00:07:51.288  [2024-12-17 00:38:40.433094] 'OCF Composite' volume operations registered
00:07:51.288  [2024-12-17 00:38:40.435297] 'SPDK_block_device' volume operations registered
00:07:51.855   00:38:41	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:51.856   00:38:41	-- common/autotest_common.sh@862 -- # return 0
00:07:51.856   00:38:41	-- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=946886
00:07:51.856   00:38:41	-- event/cpu_locks.sh@85 -- # waitforlisten 946886 /var/tmp/spdk2.sock
00:07:51.856   00:38:41	-- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:51.856   00:38:41	-- common/autotest_common.sh@829 -- # '[' -z 946886 ']'
00:07:51.856   00:38:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:51.856   00:38:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:51.856   00:38:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:51.856  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:51.856   00:38:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:51.856   00:38:41	-- common/autotest_common.sh@10 -- # set +x
00:07:52.114  [2024-12-17 00:38:41.128089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:52.114  [2024-12-17 00:38:41.128166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946886 ]
00:07:52.114  EAL: No free 2048 kB hugepages reported on node 1
00:07:52.114  [2024-12-17 00:38:41.271701] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:52.114  [2024-12-17 00:38:41.271736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:52.115  [2024-12-17 00:38:41.366268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:52.115  [2024-12-17 00:38:41.366450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.681  [2024-12-17 00:38:41.688341] 'OCF_Core' volume operations registered
00:07:52.681  [2024-12-17 00:38:41.694520] 'OCF_Cache' volume operations registered
00:07:52.681  [2024-12-17 00:38:41.701116] 'OCF Composite' volume operations registered
00:07:52.681  [2024-12-17 00:38:41.703279] 'SPDK_block_device' volume operations registered
00:07:52.940   00:38:42	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:52.940   00:38:42	-- common/autotest_common.sh@862 -- # return 0
00:07:52.940   00:38:42	-- event/cpu_locks.sh@87 -- # locks_exist 946705
00:07:52.940   00:38:42	-- event/cpu_locks.sh@22 -- # lslocks -p 946705
00:07:52.940   00:38:42	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:55.472  lslocks: write error
00:07:55.472   00:38:44	-- event/cpu_locks.sh@89 -- # killprocess 946705
00:07:55.472   00:38:44	-- common/autotest_common.sh@936 -- # '[' -z 946705 ']'
00:07:55.472   00:38:44	-- common/autotest_common.sh@940 -- # kill -0 946705
00:07:55.472    00:38:44	-- common/autotest_common.sh@941 -- # uname
00:07:55.472   00:38:44	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:55.472    00:38:44	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 946705
00:07:55.472   00:38:44	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:55.472   00:38:44	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:55.472   00:38:44	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 946705'
00:07:55.472  killing process with pid 946705
00:07:55.472   00:38:44	-- common/autotest_common.sh@955 -- # kill 946705
00:07:55.472   00:38:44	-- common/autotest_common.sh@960 -- # wait 946705
00:07:56.037   00:38:45	-- event/cpu_locks.sh@90 -- # killprocess 946886
00:07:56.037   00:38:45	-- common/autotest_common.sh@936 -- # '[' -z 946886 ']'
00:07:56.037   00:38:45	-- common/autotest_common.sh@940 -- # kill -0 946886
00:07:56.037    00:38:45	-- common/autotest_common.sh@941 -- # uname
00:07:56.037   00:38:45	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:56.037    00:38:45	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 946886
00:07:56.037   00:38:45	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:56.037   00:38:45	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:56.037   00:38:45	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 946886'
00:07:56.037  killing process with pid 946886
00:07:56.037   00:38:45	-- common/autotest_common.sh@955 -- # kill 946886
00:07:56.037   00:38:45	-- common/autotest_common.sh@960 -- # wait 946886
00:07:56.603  
00:07:56.603  real	0m5.642s
00:07:56.603  user	0m6.053s
00:07:56.603  sys	0m2.220s
00:07:56.603   00:38:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:56.603   00:38:45	-- common/autotest_common.sh@10 -- # set +x
00:07:56.603  ************************************
00:07:56.603  END TEST non_locking_app_on_locked_coremask
00:07:56.603  ************************************
00:07:56.603   00:38:45	-- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:56.603   00:38:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:56.603   00:38:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:56.603   00:38:45	-- common/autotest_common.sh@10 -- # set +x
00:07:56.603  ************************************
00:07:56.603  START TEST locking_app_on_unlocked_coremask
00:07:56.603  ************************************
00:07:56.603   00:38:45	-- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask
00:07:56.603   00:38:45	-- event/cpu_locks.sh@98 -- # spdk_tgt_pid=947468
00:07:56.603   00:38:45	-- event/cpu_locks.sh@99 -- # waitforlisten 947468 /var/tmp/spdk.sock
00:07:56.603   00:38:45	-- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:56.603   00:38:45	-- common/autotest_common.sh@829 -- # '[' -z 947468 ']'
00:07:56.603   00:38:45	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:56.603   00:38:45	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:56.603   00:38:45	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:56.603  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:56.603   00:38:45	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:56.603   00:38:45	-- common/autotest_common.sh@10 -- # set +x
00:07:56.603  [2024-12-17 00:38:45.803217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:56.603  [2024-12-17 00:38:45.803297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947468 ]
00:07:56.603  EAL: No free 2048 kB hugepages reported on node 1
00:07:56.862  [2024-12-17 00:38:45.909935] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:56.862  [2024-12-17 00:38:45.909978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:56.862  [2024-12-17 00:38:45.960755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:56.862  [2024-12-17 00:38:45.960926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.120  [2024-12-17 00:38:46.130517] 'OCF_Core' volume operations registered
00:07:57.120  [2024-12-17 00:38:46.132914] 'OCF_Cache' volume operations registered
00:07:57.120  [2024-12-17 00:38:46.135765] 'OCF Composite' volume operations registered
00:07:57.120  [2024-12-17 00:38:46.138255] 'SPDK_block_device' volume operations registered
00:07:57.687   00:38:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:57.687   00:38:46	-- common/autotest_common.sh@862 -- # return 0
00:07:57.687   00:38:46	-- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:57.687   00:38:46	-- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=947647
00:07:57.687   00:38:46	-- event/cpu_locks.sh@103 -- # waitforlisten 947647 /var/tmp/spdk2.sock
00:07:57.687   00:38:46	-- common/autotest_common.sh@829 -- # '[' -z 947647 ']'
00:07:57.687   00:38:46	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:57.687   00:38:46	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:57.687   00:38:46	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:57.687  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:57.687   00:38:46	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:57.687   00:38:46	-- common/autotest_common.sh@10 -- # set +x
00:07:57.687  [2024-12-17 00:38:46.807806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:57.687  [2024-12-17 00:38:46.807880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947647 ]
00:07:57.687  EAL: No free 2048 kB hugepages reported on node 1
00:07:57.945  [2024-12-17 00:38:46.952871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.945  [2024-12-17 00:38:47.043424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:57.945  [2024-12-17 00:38:47.043587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.204  [2024-12-17 00:38:47.381043] 'OCF_Core' volume operations registered
00:07:58.204  [2024-12-17 00:38:47.387793] 'OCF_Cache' volume operations registered
00:07:58.204  [2024-12-17 00:38:47.390737] 'OCF Composite' volume operations registered
00:07:58.204  [2024-12-17 00:38:47.397211] 'SPDK_block_device' volume operations registered
00:07:58.771   00:38:47	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:58.771   00:38:47	-- common/autotest_common.sh@862 -- # return 0
00:07:58.771   00:38:47	-- event/cpu_locks.sh@105 -- # locks_exist 947647
00:07:58.771   00:38:47	-- event/cpu_locks.sh@22 -- # lslocks -p 947647
00:07:58.771   00:38:47	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:00.673  lslocks: write error
00:08:00.673   00:38:49	-- event/cpu_locks.sh@107 -- # killprocess 947468
00:08:00.673   00:38:49	-- common/autotest_common.sh@936 -- # '[' -z 947468 ']'
00:08:00.673   00:38:49	-- common/autotest_common.sh@940 -- # kill -0 947468
00:08:00.673    00:38:49	-- common/autotest_common.sh@941 -- # uname
00:08:00.673   00:38:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:00.673    00:38:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 947468
00:08:00.673   00:38:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:00.673   00:38:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:00.673   00:38:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 947468'
00:08:00.673  killing process with pid 947468
00:08:00.673   00:38:49	-- common/autotest_common.sh@955 -- # kill 947468
00:08:00.673   00:38:49	-- common/autotest_common.sh@960 -- # wait 947468
00:08:02.050   00:38:50	-- event/cpu_locks.sh@108 -- # killprocess 947647
00:08:02.050   00:38:50	-- common/autotest_common.sh@936 -- # '[' -z 947647 ']'
00:08:02.050   00:38:50	-- common/autotest_common.sh@940 -- # kill -0 947647
00:08:02.050    00:38:50	-- common/autotest_common.sh@941 -- # uname
00:08:02.050   00:38:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:02.050    00:38:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 947647
00:08:02.050   00:38:50	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:02.050   00:38:50	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:02.050   00:38:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 947647'
00:08:02.050  killing process with pid 947647
00:08:02.050   00:38:50	-- common/autotest_common.sh@955 -- # kill 947647
00:08:02.050   00:38:50	-- common/autotest_common.sh@960 -- # wait 947647
00:08:02.308  
00:08:02.308  real	0m5.715s
00:08:02.308  user	0m6.115s
00:08:02.308  sys	0m2.246s
00:08:02.308   00:38:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:02.308   00:38:51	-- common/autotest_common.sh@10 -- # set +x
00:08:02.308  ************************************
00:08:02.308  END TEST locking_app_on_unlocked_coremask
00:08:02.308  ************************************
00:08:02.308   00:38:51	-- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:08:02.308   00:38:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:02.308   00:38:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:02.308   00:38:51	-- common/autotest_common.sh@10 -- # set +x
00:08:02.308  ************************************
00:08:02.308  START TEST locking_app_on_locked_coremask
00:08:02.308  ************************************
00:08:02.308   00:38:51	-- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask
00:08:02.308   00:38:51	-- event/cpu_locks.sh@115 -- # spdk_tgt_pid=948292
00:08:02.308   00:38:51	-- event/cpu_locks.sh@116 -- # waitforlisten 948292 /var/tmp/spdk.sock
00:08:02.308   00:38:51	-- common/autotest_common.sh@829 -- # '[' -z 948292 ']'
00:08:02.308   00:38:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:02.308   00:38:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:02.308   00:38:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:02.308  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:02.308   00:38:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:02.308   00:38:51	-- common/autotest_common.sh@10 -- # set +x
00:08:02.308   00:38:51	-- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:02.308  [2024-12-17 00:38:51.566281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:02.308  [2024-12-17 00:38:51.566355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948292 ]
00:08:02.567  EAL: No free 2048 kB hugepages reported on node 1
00:08:02.567  [2024-12-17 00:38:51.674393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:02.567  [2024-12-17 00:38:51.721919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:02.567  [2024-12-17 00:38:51.722079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.826  [2024-12-17 00:38:51.882758] 'OCF_Core' volume operations registered
00:08:02.826  [2024-12-17 00:38:51.885038] 'OCF_Cache' volume operations registered
00:08:02.826  [2024-12-17 00:38:51.887715] 'OCF Composite' volume operations registered
00:08:02.826  [2024-12-17 00:38:51.889991] 'SPDK_block_device' volume operations registered
00:08:03.394   00:38:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:03.394   00:38:52	-- common/autotest_common.sh@862 -- # return 0
00:08:03.394   00:38:52	-- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=948406
00:08:03.394   00:38:52	-- event/cpu_locks.sh@120 -- # NOT waitforlisten 948406 /var/tmp/spdk2.sock
00:08:03.394   00:38:52	-- common/autotest_common.sh@650 -- # local es=0
00:08:03.394   00:38:52	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 948406 /var/tmp/spdk2.sock
00:08:03.394   00:38:52	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:03.394   00:38:52	-- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:03.394   00:38:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:03.394    00:38:52	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:03.394   00:38:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:03.394   00:38:52	-- common/autotest_common.sh@653 -- # waitforlisten 948406 /var/tmp/spdk2.sock
00:08:03.394   00:38:52	-- common/autotest_common.sh@829 -- # '[' -z 948406 ']'
00:08:03.394   00:38:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:03.394   00:38:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:03.394   00:38:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:03.394  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:03.394   00:38:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:03.394   00:38:52	-- common/autotest_common.sh@10 -- # set +x
00:08:03.394  [2024-12-17 00:38:52.558307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:03.394  [2024-12-17 00:38:52.558379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948406 ]
00:08:03.394  EAL: No free 2048 kB hugepages reported on node 1
00:08:03.652  [2024-12-17 00:38:52.699822] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 948292 has claimed it.
00:08:03.652  [2024-12-17 00:38:52.699871] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:04.219  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (948406) - No such process
00:08:04.219  ERROR: process (pid: 948406) is no longer running
00:08:04.219   00:38:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:04.219   00:38:53	-- common/autotest_common.sh@862 -- # return 1
00:08:04.219   00:38:53	-- common/autotest_common.sh@653 -- # es=1
00:08:04.219   00:38:53	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:04.219   00:38:53	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:04.219   00:38:53	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:04.219   00:38:53	-- event/cpu_locks.sh@122 -- # locks_exist 948292
00:08:04.219   00:38:53	-- event/cpu_locks.sh@22 -- # lslocks -p 948292
00:08:04.219   00:38:53	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:05.155  lslocks: write error
00:08:05.155   00:38:54	-- event/cpu_locks.sh@124 -- # killprocess 948292
00:08:05.155   00:38:54	-- common/autotest_common.sh@936 -- # '[' -z 948292 ']'
00:08:05.155   00:38:54	-- common/autotest_common.sh@940 -- # kill -0 948292
00:08:05.155    00:38:54	-- common/autotest_common.sh@941 -- # uname
00:08:05.155   00:38:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:05.155    00:38:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 948292
00:08:05.155   00:38:54	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:05.155   00:38:54	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:05.155   00:38:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 948292'
00:08:05.155  killing process with pid 948292
00:08:05.155   00:38:54	-- common/autotest_common.sh@955 -- # kill 948292
00:08:05.155   00:38:54	-- common/autotest_common.sh@960 -- # wait 948292
00:08:05.723  
00:08:05.723  real	0m3.210s
00:08:05.723  user	0m3.557s
00:08:05.723  sys	0m1.235s
00:08:05.723   00:38:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:05.723   00:38:54	-- common/autotest_common.sh@10 -- # set +x
00:08:05.723  ************************************
00:08:05.723  END TEST locking_app_on_locked_coremask
00:08:05.723  ************************************
00:08:05.723   00:38:54	-- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:05.723   00:38:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:05.723   00:38:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:05.723   00:38:54	-- common/autotest_common.sh@10 -- # set +x
00:08:05.723  ************************************
00:08:05.723  START TEST locking_overlapped_coremask
00:08:05.724  ************************************
00:08:05.724   00:38:54	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask
00:08:05.724   00:38:54	-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=948787
00:08:05.724   00:38:54	-- event/cpu_locks.sh@133 -- # waitforlisten 948787 /var/tmp/spdk.sock
00:08:05.724   00:38:54	-- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:08:05.724   00:38:54	-- common/autotest_common.sh@829 -- # '[' -z 948787 ']'
00:08:05.724   00:38:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:05.724   00:38:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:05.724   00:38:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:05.724  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:05.724   00:38:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:05.724   00:38:54	-- common/autotest_common.sh@10 -- # set +x
00:08:05.724  [2024-12-17 00:38:54.829567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:05.724  [2024-12-17 00:38:54.829647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948787 ]
00:08:05.724  EAL: No free 2048 kB hugepages reported on node 1
00:08:05.724  [2024-12-17 00:38:54.938822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:05.982  [2024-12-17 00:38:54.992182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:05.982  [2024-12-17 00:38:54.992379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:05.982  [2024-12-17 00:38:54.992444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:05.982  [2024-12-17 00:38:54.992450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.982  [2024-12-17 00:38:55.156011] 'OCF_Core' volume operations registered
00:08:05.982  [2024-12-17 00:38:55.158258] 'OCF_Cache' volume operations registered
00:08:05.982  [2024-12-17 00:38:55.160946] 'OCF Composite' volume operations registered
00:08:05.982  [2024-12-17 00:38:55.163219] 'SPDK_block_device' volume operations registered
00:08:06.551   00:38:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:06.551   00:38:55	-- common/autotest_common.sh@862 -- # return 0
00:08:06.551   00:38:55	-- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:06.551   00:38:55	-- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=948969
00:08:06.551   00:38:55	-- event/cpu_locks.sh@137 -- # NOT waitforlisten 948969 /var/tmp/spdk2.sock
00:08:06.551   00:38:55	-- common/autotest_common.sh@650 -- # local es=0
00:08:06.551   00:38:55	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 948969 /var/tmp/spdk2.sock
00:08:06.551   00:38:55	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:06.551   00:38:55	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:06.551    00:38:55	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:06.551   00:38:55	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:06.551   00:38:55	-- common/autotest_common.sh@653 -- # waitforlisten 948969 /var/tmp/spdk2.sock
00:08:06.551   00:38:55	-- common/autotest_common.sh@829 -- # '[' -z 948969 ']'
00:08:06.551   00:38:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:06.551   00:38:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:06.551   00:38:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:06.551  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:06.551   00:38:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:06.551   00:38:55	-- common/autotest_common.sh@10 -- # set +x
00:08:06.551  [2024-12-17 00:38:55.763114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:06.551  [2024-12-17 00:38:55.763168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948969 ]
00:08:06.551  EAL: No free 2048 kB hugepages reported on node 1
00:08:06.810  [2024-12-17 00:38:55.862047] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 948787 has claimed it.
00:08:06.810  [2024-12-17 00:38:55.862089] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:07.379  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (948969) - No such process
00:08:07.379  ERROR: process (pid: 948969) is no longer running
00:08:07.379   00:38:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:07.379   00:38:56	-- common/autotest_common.sh@862 -- # return 1
00:08:07.379   00:38:56	-- common/autotest_common.sh@653 -- # es=1
00:08:07.379   00:38:56	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:07.379   00:38:56	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:07.379   00:38:56	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:07.379   00:38:56	-- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:07.379   00:38:56	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:07.379   00:38:56	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:07.379   00:38:56	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:07.379   00:38:56	-- event/cpu_locks.sh@141 -- # killprocess 948787
00:08:07.379   00:38:56	-- common/autotest_common.sh@936 -- # '[' -z 948787 ']'
00:08:07.379   00:38:56	-- common/autotest_common.sh@940 -- # kill -0 948787
00:08:07.379    00:38:56	-- common/autotest_common.sh@941 -- # uname
00:08:07.379   00:38:56	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:07.379    00:38:56	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 948787
00:08:07.379   00:38:56	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:07.379   00:38:56	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:07.379   00:38:56	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 948787'
00:08:07.379  killing process with pid 948787
00:08:07.379   00:38:56	-- common/autotest_common.sh@955 -- # kill 948787
00:08:07.379   00:38:56	-- common/autotest_common.sh@960 -- # wait 948787
00:08:07.949  
00:08:07.949  real	0m2.264s
00:08:07.949  user	0m6.311s
00:08:07.949  sys	0m0.627s
00:08:07.949   00:38:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:07.949   00:38:57	-- common/autotest_common.sh@10 -- # set +x
00:08:07.949  ************************************
00:08:07.949  END TEST locking_overlapped_coremask
00:08:07.949  ************************************
00:08:07.949   00:38:57	-- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:07.949   00:38:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:07.949   00:38:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:07.949   00:38:57	-- common/autotest_common.sh@10 -- # set +x
00:08:07.949  ************************************
00:08:07.949  START TEST locking_overlapped_coremask_via_rpc
00:08:07.949  ************************************
00:08:07.949   00:38:57	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc
00:08:07.949   00:38:57	-- event/cpu_locks.sh@148 -- # spdk_tgt_pid=949175
00:08:07.949   00:38:57	-- event/cpu_locks.sh@149 -- # waitforlisten 949175 /var/tmp/spdk.sock
00:08:07.949   00:38:57	-- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:07.949   00:38:57	-- common/autotest_common.sh@829 -- # '[' -z 949175 ']'
00:08:07.949   00:38:57	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:07.949   00:38:57	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:07.949   00:38:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:07.949  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:07.949   00:38:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:07.949   00:38:57	-- common/autotest_common.sh@10 -- # set +x
00:08:07.949  [2024-12-17 00:38:57.146128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:07.949  [2024-12-17 00:38:57.146209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949175 ]
00:08:07.949  EAL: No free 2048 kB hugepages reported on node 1
00:08:08.209  [2024-12-17 00:38:57.255518] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:08.209  [2024-12-17 00:38:57.255562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:08.209  [2024-12-17 00:38:57.309325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:08.209  [2024-12-17 00:38:57.309542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:08.209  [2024-12-17 00:38:57.309633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:08.209  [2024-12-17 00:38:57.309637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.469  [2024-12-17 00:38:57.472565] 'OCF_Core' volume operations registered
00:08:08.469  [2024-12-17 00:38:57.474989] 'OCF_Cache' volume operations registered
00:08:08.469  [2024-12-17 00:38:57.477905] 'OCF Composite' volume operations registered
00:08:08.469  [2024-12-17 00:38:57.480338] 'SPDK_block_device' volume operations registered
00:08:09.038   00:38:58	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:09.038   00:38:58	-- common/autotest_common.sh@862 -- # return 0
00:08:09.038   00:38:58	-- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:09.038   00:38:58	-- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=949211
00:08:09.038   00:38:58	-- event/cpu_locks.sh@153 -- # waitforlisten 949211 /var/tmp/spdk2.sock
00:08:09.038   00:38:58	-- common/autotest_common.sh@829 -- # '[' -z 949211 ']'
00:08:09.038   00:38:58	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:09.038   00:38:58	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:09.038   00:38:58	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:09.038  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:09.038   00:38:58	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:09.038   00:38:58	-- common/autotest_common.sh@10 -- # set +x
00:08:09.038  [2024-12-17 00:38:58.156595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:09.038  [2024-12-17 00:38:58.156669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949211 ]
00:08:09.038  EAL: No free 2048 kB hugepages reported on node 1
00:08:09.038  [2024-12-17 00:38:58.272253] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:09.038  [2024-12-17 00:38:58.272284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:09.298  [2024-12-17 00:38:58.360222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:09.298  [2024-12-17 00:38:58.360532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:09.298  [2024-12-17 00:38:58.360622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:09.298  [2024-12-17 00:38:58.360625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:08:09.557  [2024-12-17 00:38:58.657084] 'OCF_Core' volume operations registered
00:08:09.557  [2024-12-17 00:38:58.663266] 'OCF_Cache' volume operations registered
00:08:09.557  [2024-12-17 00:38:58.665882] 'OCF Composite' volume operations registered
00:08:09.557  [2024-12-17 00:38:58.672066] 'SPDK_block_device' volume operations registered
00:08:10.124   00:38:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:10.125   00:38:59	-- common/autotest_common.sh@862 -- # return 0
00:08:10.125   00:38:59	-- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:10.125   00:38:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.125   00:38:59	-- common/autotest_common.sh@10 -- # set +x
00:08:10.125   00:38:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.125   00:38:59	-- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:10.125   00:38:59	-- common/autotest_common.sh@650 -- # local es=0
00:08:10.125   00:38:59	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:10.125   00:38:59	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:10.125   00:38:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:10.125    00:38:59	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:10.125   00:38:59	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:10.125   00:38:59	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:10.125   00:38:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.125   00:38:59	-- common/autotest_common.sh@10 -- # set +x
00:08:10.125  [2024-12-17 00:38:59.141962] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 949175 has claimed it.
00:08:10.125  request:
00:08:10.125  {
00:08:10.125  "method": "framework_enable_cpumask_locks",
00:08:10.125  "req_id": 1
00:08:10.125  }
00:08:10.125  Got JSON-RPC error response
00:08:10.125  response:
00:08:10.125  {
00:08:10.125  "code": -32603,
00:08:10.125  "message": "Failed to claim CPU core: 2"
00:08:10.125  }
00:08:10.125   00:38:59	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:10.125   00:38:59	-- common/autotest_common.sh@653 -- # es=1
00:08:10.125   00:38:59	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:10.125   00:38:59	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:10.125   00:38:59	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:10.125   00:38:59	-- event/cpu_locks.sh@158 -- # waitforlisten 949175 /var/tmp/spdk.sock
00:08:10.125   00:38:59	-- common/autotest_common.sh@829 -- # '[' -z 949175 ']'
00:08:10.125   00:38:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:10.125   00:38:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:10.125   00:38:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:10.125  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:10.125   00:38:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:10.125   00:38:59	-- common/autotest_common.sh@10 -- # set +x
00:08:10.384   00:38:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:10.384   00:38:59	-- common/autotest_common.sh@862 -- # return 0
00:08:10.384   00:38:59	-- event/cpu_locks.sh@159 -- # waitforlisten 949211 /var/tmp/spdk2.sock
00:08:10.384   00:38:59	-- common/autotest_common.sh@829 -- # '[' -z 949211 ']'
00:08:10.384   00:38:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:10.384   00:38:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:10.384   00:38:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:10.384  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:10.384   00:38:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:10.384   00:38:59	-- common/autotest_common.sh@10 -- # set +x
00:08:10.643   00:38:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:10.643   00:38:59	-- common/autotest_common.sh@862 -- # return 0
00:08:10.643   00:38:59	-- event/cpu_locks.sh@161 -- # check_remaining_locks
00:08:10.643   00:38:59	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:10.643   00:38:59	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:10.643   00:38:59	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:10.643  
00:08:10.643  real	0m2.593s
00:08:10.643  user	0m1.280s
00:08:10.643  sys	0m0.247s
00:08:10.643   00:38:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:10.643   00:38:59	-- common/autotest_common.sh@10 -- # set +x
00:08:10.643  ************************************
00:08:10.643  END TEST locking_overlapped_coremask_via_rpc
00:08:10.643  ************************************
00:08:10.643   00:38:59	-- event/cpu_locks.sh@174 -- # cleanup
00:08:10.643   00:38:59	-- event/cpu_locks.sh@15 -- # [[ -z 949175 ]]
00:08:10.643   00:38:59	-- event/cpu_locks.sh@15 -- # killprocess 949175
00:08:10.643   00:38:59	-- common/autotest_common.sh@936 -- # '[' -z 949175 ']'
00:08:10.643   00:38:59	-- common/autotest_common.sh@940 -- # kill -0 949175
00:08:10.643    00:38:59	-- common/autotest_common.sh@941 -- # uname
00:08:10.643   00:38:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:10.643    00:38:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 949175
00:08:10.643   00:38:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:10.643   00:38:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:10.643   00:38:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 949175'
00:08:10.643  killing process with pid 949175
00:08:10.643   00:38:59	-- common/autotest_common.sh@955 -- # kill 949175
00:08:10.643   00:38:59	-- common/autotest_common.sh@960 -- # wait 949175
00:08:11.212   00:39:00	-- event/cpu_locks.sh@16 -- # [[ -z 949211 ]]
00:08:11.212   00:39:00	-- event/cpu_locks.sh@16 -- # killprocess 949211
00:08:11.212   00:39:00	-- common/autotest_common.sh@936 -- # '[' -z 949211 ']'
00:08:11.212   00:39:00	-- common/autotest_common.sh@940 -- # kill -0 949211
00:08:11.212    00:39:00	-- common/autotest_common.sh@941 -- # uname
00:08:11.212   00:39:00	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:11.212    00:39:00	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 949211
00:08:11.212   00:39:00	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:08:11.212   00:39:00	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:08:11.212   00:39:00	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 949211'
00:08:11.212  killing process with pid 949211
00:08:11.212   00:39:00	-- common/autotest_common.sh@955 -- # kill 949211
00:08:11.212   00:39:00	-- common/autotest_common.sh@960 -- # wait 949211
00:08:11.781   00:39:00	-- event/cpu_locks.sh@18 -- # rm -f
00:08:11.781   00:39:00	-- event/cpu_locks.sh@1 -- # cleanup
00:08:11.781   00:39:00	-- event/cpu_locks.sh@15 -- # [[ -z 949175 ]]
00:08:11.781   00:39:00	-- event/cpu_locks.sh@15 -- # killprocess 949175
00:08:11.781   00:39:00	-- common/autotest_common.sh@936 -- # '[' -z 949175 ']'
00:08:11.781   00:39:00	-- common/autotest_common.sh@940 -- # kill -0 949175
00:08:11.781  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (949175) - No such process
00:08:11.781   00:39:00	-- common/autotest_common.sh@963 -- # echo 'Process with pid 949175 is not found'
00:08:11.781  Process with pid 949175 is not found
00:08:11.781   00:39:00	-- event/cpu_locks.sh@16 -- # [[ -z 949211 ]]
00:08:11.781   00:39:00	-- event/cpu_locks.sh@16 -- # killprocess 949211
00:08:11.781   00:39:00	-- common/autotest_common.sh@936 -- # '[' -z 949211 ']'
00:08:11.781   00:39:00	-- common/autotest_common.sh@940 -- # kill -0 949211
00:08:11.781  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (949211) - No such process
00:08:11.781   00:39:00	-- common/autotest_common.sh@963 -- # echo 'Process with pid 949211 is not found'
00:08:11.781  Process with pid 949211 is not found
00:08:11.781   00:39:00	-- event/cpu_locks.sh@18 -- # rm -f
00:08:11.781  
00:08:11.781  real	0m25.800s
00:08:11.781  user	0m42.096s
00:08:11.781  sys	0m9.714s
00:08:11.781   00:39:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:11.781   00:39:00	-- common/autotest_common.sh@10 -- # set +x
00:08:11.781  ************************************
00:08:11.781  END TEST cpu_locks
00:08:11.781  ************************************
00:08:11.781  
00:08:11.781  real	0m52.684s
00:08:11.781  user	1m35.259s
00:08:11.781  sys	0m14.590s
00:08:11.781   00:39:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:11.781   00:39:00	-- common/autotest_common.sh@10 -- # set +x
00:08:11.781  ************************************
00:08:11.781  END TEST event
00:08:11.781  ************************************
00:08:11.781   00:39:00	-- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh
00:08:11.781   00:39:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:11.781   00:39:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:11.781   00:39:00	-- common/autotest_common.sh@10 -- # set +x
00:08:11.781  ************************************
00:08:11.781  START TEST thread
00:08:11.781  ************************************
00:08:11.781   00:39:00	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh
00:08:11.781  * Looking for test storage...
00:08:11.781  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread
00:08:11.781    00:39:01	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:11.781     00:39:01	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:11.781     00:39:01	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:12.040    00:39:01	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:12.040    00:39:01	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:12.040    00:39:01	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:12.040    00:39:01	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:12.040    00:39:01	-- scripts/common.sh@335 -- # IFS=.-:
00:08:12.040    00:39:01	-- scripts/common.sh@335 -- # read -ra ver1
00:08:12.040    00:39:01	-- scripts/common.sh@336 -- # IFS=.-:
00:08:12.040    00:39:01	-- scripts/common.sh@336 -- # read -ra ver2
00:08:12.040    00:39:01	-- scripts/common.sh@337 -- # local 'op=<'
00:08:12.040    00:39:01	-- scripts/common.sh@339 -- # ver1_l=2
00:08:12.040    00:39:01	-- scripts/common.sh@340 -- # ver2_l=1
00:08:12.040    00:39:01	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:12.040    00:39:01	-- scripts/common.sh@343 -- # case "$op" in
00:08:12.040    00:39:01	-- scripts/common.sh@344 -- # : 1
00:08:12.040    00:39:01	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:12.040    00:39:01	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:12.040     00:39:01	-- scripts/common.sh@364 -- # decimal 1
00:08:12.040     00:39:01	-- scripts/common.sh@352 -- # local d=1
00:08:12.040     00:39:01	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:12.040     00:39:01	-- scripts/common.sh@354 -- # echo 1
00:08:12.040    00:39:01	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:12.040     00:39:01	-- scripts/common.sh@365 -- # decimal 2
00:08:12.040     00:39:01	-- scripts/common.sh@352 -- # local d=2
00:08:12.040     00:39:01	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:12.040     00:39:01	-- scripts/common.sh@354 -- # echo 2
00:08:12.040    00:39:01	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:12.040    00:39:01	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:12.040    00:39:01	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:12.040    00:39:01	-- scripts/common.sh@367 -- # return 0
00:08:12.040    00:39:01	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:12.040    00:39:01	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:12.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.040  		--rc genhtml_branch_coverage=1
00:08:12.040  		--rc genhtml_function_coverage=1
00:08:12.040  		--rc genhtml_legend=1
00:08:12.040  		--rc geninfo_all_blocks=1
00:08:12.040  		--rc geninfo_unexecuted_blocks=1
00:08:12.040  		
00:08:12.040  		'
00:08:12.040    00:39:01	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:12.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.040  		--rc genhtml_branch_coverage=1
00:08:12.040  		--rc genhtml_function_coverage=1
00:08:12.040  		--rc genhtml_legend=1
00:08:12.040  		--rc geninfo_all_blocks=1
00:08:12.040  		--rc geninfo_unexecuted_blocks=1
00:08:12.040  		
00:08:12.040  		'
00:08:12.040    00:39:01	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:12.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.040  		--rc genhtml_branch_coverage=1
00:08:12.040  		--rc genhtml_function_coverage=1
00:08:12.040  		--rc genhtml_legend=1
00:08:12.040  		--rc geninfo_all_blocks=1
00:08:12.040  		--rc geninfo_unexecuted_blocks=1
00:08:12.040  		
00:08:12.040  		'
00:08:12.040    00:39:01	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:12.040  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:12.040  		--rc genhtml_branch_coverage=1
00:08:12.040  		--rc genhtml_function_coverage=1
00:08:12.040  		--rc genhtml_legend=1
00:08:12.040  		--rc geninfo_all_blocks=1
00:08:12.040  		--rc geninfo_unexecuted_blocks=1
00:08:12.040  		
00:08:12.040  		'
00:08:12.040   00:39:01	-- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:12.040   00:39:01	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:08:12.040   00:39:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:12.040   00:39:01	-- common/autotest_common.sh@10 -- # set +x
00:08:12.040  ************************************
00:08:12.040  START TEST thread_poller_perf
00:08:12.040  ************************************
00:08:12.040   00:39:01	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:12.040  [2024-12-17 00:39:01.130797] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:12.041  [2024-12-17 00:39:01.130902] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949891 ]
00:08:12.041  EAL: No free 2048 kB hugepages reported on node 1
00:08:12.041  [2024-12-17 00:39:01.225441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.041  [2024-12-17 00:39:01.274680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.041  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:08:13.420  ======================================
00:08:13.420  busy:2316600816 (cyc)
00:08:13.420  total_run_count: 259000
00:08:13.420  tsc_hz: 2300000000 (cyc)
00:08:13.420  ======================================
00:08:13.420  poller_cost: 8944 (cyc), 3888 (nsec)
00:08:13.420  
00:08:13.420  real	0m1.255s
00:08:13.420  user	0m1.138s
00:08:13.420  sys	0m0.110s
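The poller_cost line is derivable from the counters above: busy cycles divided by run count, then converted to nanoseconds at tsc_hz. A worked check (not harness output):

  echo $(( 2316600816 / 259000 ))               # 8944 (cyc)
  echo $(( 8944 * 1000000000 / 2300000000 ))    # 3888 (nsec) at 2.3 GHz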
00:08:13.420   00:39:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:13.420   00:39:02	-- common/autotest_common.sh@10 -- # set +x
00:08:13.420  ************************************
00:08:13.420  END TEST thread_poller_perf
00:08:13.420  ************************************
00:08:13.420   00:39:02	-- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:13.420   00:39:02	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:08:13.420   00:39:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:13.420   00:39:02	-- common/autotest_common.sh@10 -- # set +x
00:08:13.420  ************************************
00:08:13.420  START TEST thread_poller_perf
00:08:13.420  ************************************
00:08:13.420   00:39:02	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:13.420  [2024-12-17 00:39:02.432507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:13.420  [2024-12-17 00:39:02.432595] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950128 ]
00:08:13.420  EAL: No free 2048 kB hugepages reported on node 1
00:08:13.420  [2024-12-17 00:39:02.535509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.420  [2024-12-17 00:39:02.585193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:13.420  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:08:14.801  ======================================
00:08:14.801  busy:2303067942 (cyc)
00:08:14.801  total_run_count: 3483000
00:08:14.801  tsc_hz: 2300000000 (cyc)
00:08:14.801  ======================================
00:08:14.801  poller_cost: 661 (cyc), 287 (nsec)
00:08:14.801  
00:08:14.801  real	0m1.253s
00:08:14.801  user	0m1.136s
00:08:14.801  sys	0m0.111s
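Cross-checking the 0 us run the same way: with no sleep between iterations the run count rises from 259000 to 3483000 (about 13.4x) and per-call cost falls to match (arithmetic check only, not harness output):

  echo $(( 2303067942 / 3483000 ))              # 661 (cyc)
  echo $(( 661 * 1000000000 / 2300000000 ))     # 287 (nsec)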
00:08:14.801   00:39:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:14.801   00:39:03	-- common/autotest_common.sh@10 -- # set +x
00:08:14.801  ************************************
00:08:14.801  END TEST thread_poller_perf
00:08:14.801  ************************************
00:08:14.801   00:39:03	-- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:14.801  
00:08:14.801  real	0m2.796s
00:08:14.801  user	0m2.394s
00:08:14.801  sys	0m0.420s
00:08:14.801   00:39:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:14.801   00:39:03	-- common/autotest_common.sh@10 -- # set +x
00:08:14.801  ************************************
00:08:14.801  END TEST thread
00:08:14.801  ************************************
00:08:14.801   00:39:03	-- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh
00:08:14.801   00:39:03	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:14.801   00:39:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:14.801   00:39:03	-- common/autotest_common.sh@10 -- # set +x
00:08:14.801  ************************************
00:08:14.801  START TEST accel
00:08:14.801  ************************************
00:08:14.801   00:39:03	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh
00:08:14.801  * Looking for test storage...
00:08:14.801  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel
00:08:14.801    00:39:03	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:14.801     00:39:03	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:14.801     00:39:03	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:14.801    00:39:03	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:14.801    00:39:03	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:14.801    00:39:03	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:14.801    00:39:03	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:14.801    00:39:03	-- scripts/common.sh@335 -- # IFS=.-:
00:08:14.801    00:39:03	-- scripts/common.sh@335 -- # read -ra ver1
00:08:14.801    00:39:03	-- scripts/common.sh@336 -- # IFS=.-:
00:08:14.801    00:39:03	-- scripts/common.sh@336 -- # read -ra ver2
00:08:14.801    00:39:03	-- scripts/common.sh@337 -- # local 'op=<'
00:08:14.801    00:39:03	-- scripts/common.sh@339 -- # ver1_l=2
00:08:14.801    00:39:03	-- scripts/common.sh@340 -- # ver2_l=1
00:08:14.801    00:39:03	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:14.801    00:39:03	-- scripts/common.sh@343 -- # case "$op" in
00:08:14.801    00:39:03	-- scripts/common.sh@344 -- # : 1
00:08:14.801    00:39:03	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:14.801    00:39:03	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:14.801     00:39:03	-- scripts/common.sh@364 -- # decimal 1
00:08:14.801     00:39:03	-- scripts/common.sh@352 -- # local d=1
00:08:14.801     00:39:03	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:14.801     00:39:03	-- scripts/common.sh@354 -- # echo 1
00:08:14.801    00:39:03	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:14.801     00:39:03	-- scripts/common.sh@365 -- # decimal 2
00:08:14.801     00:39:03	-- scripts/common.sh@352 -- # local d=2
00:08:14.801     00:39:03	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:14.801     00:39:03	-- scripts/common.sh@354 -- # echo 2
00:08:14.801    00:39:03	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:14.801    00:39:03	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:14.801    00:39:03	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:14.801    00:39:03	-- scripts/common.sh@367 -- # return 0
00:08:14.801    00:39:03	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:14.801    00:39:03	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:14.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.801  		--rc genhtml_branch_coverage=1
00:08:14.801  		--rc genhtml_function_coverage=1
00:08:14.801  		--rc genhtml_legend=1
00:08:14.801  		--rc geninfo_all_blocks=1
00:08:14.801  		--rc geninfo_unexecuted_blocks=1
00:08:14.801  		
00:08:14.801  		'
00:08:14.801    00:39:03	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:14.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.801  		--rc genhtml_branch_coverage=1
00:08:14.801  		--rc genhtml_function_coverage=1
00:08:14.801  		--rc genhtml_legend=1
00:08:14.801  		--rc geninfo_all_blocks=1
00:08:14.801  		--rc geninfo_unexecuted_blocks=1
00:08:14.801  		
00:08:14.801  		'
00:08:14.801    00:39:03	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:14.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.801  		--rc genhtml_branch_coverage=1
00:08:14.801  		--rc genhtml_function_coverage=1
00:08:14.801  		--rc genhtml_legend=1
00:08:14.801  		--rc geninfo_all_blocks=1
00:08:14.801  		--rc geninfo_unexecuted_blocks=1
00:08:14.801  		
00:08:14.801  		'
00:08:14.801    00:39:03	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:14.801  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.801  		--rc genhtml_branch_coverage=1
00:08:14.801  		--rc genhtml_function_coverage=1
00:08:14.801  		--rc genhtml_legend=1
00:08:14.801  		--rc geninfo_all_blocks=1
00:08:14.801  		--rc geninfo_unexecuted_blocks=1
00:08:14.801  		
00:08:14.801  		'
00:08:14.801   00:39:03	-- accel/accel.sh@73 -- # declare -A expected_opcs
00:08:14.801   00:39:03	-- accel/accel.sh@74 -- # get_expected_opcs
00:08:14.801   00:39:03	-- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:08:14.801   00:39:03	-- accel/accel.sh@59 -- # spdk_tgt_pid=950390
00:08:14.801   00:39:03	-- accel/accel.sh@60 -- # waitforlisten 950390
00:08:14.801   00:39:03	-- common/autotest_common.sh@829 -- # '[' -z 950390 ']'
00:08:14.801   00:39:03	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:14.801   00:39:03	-- accel/accel.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:08:14.801   00:39:03	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:14.801    00:39:03	-- accel/accel.sh@58 -- # build_accel_config
00:08:14.801   00:39:03	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:14.801  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:14.801   00:39:03	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:14.801    00:39:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:14.801   00:39:03	-- common/autotest_common.sh@10 -- # set +x
00:08:14.801    00:39:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:14.801    00:39:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:14.801    00:39:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:14.801    00:39:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:14.801    00:39:03	-- accel/accel.sh@41 -- # local IFS=,
00:08:14.801    00:39:03	-- accel/accel.sh@42 -- # jq -r .
00:08:14.801  [2024-12-17 00:39:04.003467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:14.801  [2024-12-17 00:39:04.003533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950390 ]
00:08:14.801  EAL: No free 2048 kB hugepages reported on node 1
00:08:15.061  [2024-12-17 00:39:04.090430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.061  [2024-12-17 00:39:04.136691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:15.061  [2024-12-17 00:39:04.136852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.061  [2024-12-17 00:39:04.294809] 'OCF_Core' volume operations registered
00:08:15.061  [2024-12-17 00:39:04.296977] 'OCF_Cache' volume operations registered
00:08:15.061  [2024-12-17 00:39:04.299521] 'OCF Composite' volume operations registered
00:08:15.061  [2024-12-17 00:39:04.301688] 'SPDK_block_device' volume operations registered
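waitforlisten above blocks until spdk_tgt's RPC socket appears, using the max_retries=100 seen in the trace. A hedged sketch of the helper's core loop (illustrative, not the verbatim autotest_common.sh body):

  for ((i = 100; i != 0; i--)); do
      [[ -S /var/tmp/spdk.sock ]] && break   # socket exists: target is listening
      sleep 0.5
  done
  (( i == 0 )) && return 1                   # caller treats this as startup failure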
00:08:16.001   00:39:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:16.001   00:39:04	-- common/autotest_common.sh@862 -- # return 0
00:08:16.001   00:39:04	-- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:08:16.001    00:39:04	-- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments
00:08:16.001    00:39:04	-- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:08:16.001    00:39:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.001    00:39:04	-- common/autotest_common.sh@10 -- # set +x
00:08:16.001    00:39:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:08:16.001   00:39:05	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # IFS==
00:08:16.001   00:39:05	-- accel/accel.sh@64 -- # read -r opc module
00:08:16.001   00:39:05	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
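The accel.sh@63-65 loop above splits each opc=module pair from the RPC into the expected_opcs map. An equivalent standalone form (rpc.py path illustrative; the jq filter is copied from the trace):

  declare -A expected_opcs
  while IFS== read -r opc module; do
      expected_opcs["$opc"]=$module          # e.g. expected_opcs[crc32c]=software
  done < <(./scripts/rpc.py accel_get_opc_assignments \
           | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')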
00:08:16.001   00:39:05	-- accel/accel.sh@67 -- # killprocess 950390
00:08:16.001   00:39:05	-- common/autotest_common.sh@936 -- # '[' -z 950390 ']'
00:08:16.001   00:39:05	-- common/autotest_common.sh@940 -- # kill -0 950390
00:08:16.001    00:39:05	-- common/autotest_common.sh@941 -- # uname
00:08:16.001   00:39:05	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:16.001    00:39:05	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 950390
00:08:16.001   00:39:05	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:16.001   00:39:05	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:16.001   00:39:05	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 950390'
00:08:16.001  killing process with pid 950390
00:08:16.001   00:39:05	-- common/autotest_common.sh@955 -- # kill 950390
00:08:16.001   00:39:05	-- common/autotest_common.sh@960 -- # wait 950390
00:08:16.571   00:39:05	-- accel/accel.sh@68 -- # trap - ERR
00:08:16.571   00:39:05	-- accel/accel.sh@81 -- # run_test accel_help accel_perf -h
00:08:16.571   00:39:05	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:08:16.571   00:39:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:16.571   00:39:05	-- common/autotest_common.sh@10 -- # set +x
00:08:16.571   00:39:05	-- common/autotest_common.sh@1114 -- # accel_perf -h
00:08:16.571   00:39:05	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:08:16.571    00:39:05	-- accel/accel.sh@12 -- # build_accel_config
00:08:16.571    00:39:05	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:16.571    00:39:05	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:16.571    00:39:05	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:16.571    00:39:05	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:16.571    00:39:05	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:16.571    00:39:05	-- accel/accel.sh@41 -- # local IFS=,
00:08:16.571    00:39:05	-- accel/accel.sh@42 -- # jq -r .
00:08:16.571   00:39:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:16.571   00:39:05	-- common/autotest_common.sh@10 -- # set +x
00:08:16.571   00:39:05	-- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:08:16.571   00:39:05	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:16.571   00:39:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:16.571   00:39:05	-- common/autotest_common.sh@10 -- # set +x
00:08:16.571  ************************************
00:08:16.571  START TEST accel_missing_filename
00:08:16.571  ************************************
00:08:16.571   00:39:05	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress
00:08:16.571   00:39:05	-- common/autotest_common.sh@650 -- # local es=0
00:08:16.571   00:39:05	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress
00:08:16.571   00:39:05	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:08:16.571   00:39:05	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:16.571    00:39:05	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:08:16.571   00:39:05	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:16.571   00:39:05	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress
00:08:16.571   00:39:05	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:08:16.571    00:39:05	-- accel/accel.sh@12 -- # build_accel_config
00:08:16.571    00:39:05	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:16.571    00:39:05	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:16.571    00:39:05	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:16.571    00:39:05	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:16.571    00:39:05	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:16.571    00:39:05	-- accel/accel.sh@41 -- # local IFS=,
00:08:16.571    00:39:05	-- accel/accel.sh@42 -- # jq -r .
00:08:16.571  [2024-12-17 00:39:05.703394] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:16.571  [2024-12-17 00:39:05.703466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950909 ]
00:08:16.571  EAL: No free 2048 kB hugepages reported on node 1
00:08:16.571  [2024-12-17 00:39:05.809154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.889  [2024-12-17 00:39:05.858467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:16.889  [2024-12-17 00:39:05.909629] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:16.889  [2024-12-17 00:39:05.982447] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:08:16.889  A filename is required.
00:08:16.889   00:39:06	-- common/autotest_common.sh@653 -- # es=234
00:08:16.889   00:39:06	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:16.889   00:39:06	-- common/autotest_common.sh@662 -- # es=106
00:08:16.889   00:39:06	-- common/autotest_common.sh@663 -- # case "$es" in
00:08:16.889   00:39:06	-- common/autotest_common.sh@670 -- # es=1
00:08:16.889   00:39:06	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:16.889  
00:08:16.889  real	0m0.391s
00:08:16.889  user	0m0.265s
00:08:16.889  sys	0m0.167s
00:08:16.889   00:39:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:16.889   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:16.889  ************************************
00:08:16.889  END TEST accel_missing_filename
00:08:16.889  ************************************
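accel_missing_filename passes precisely because accel_perf exits non-zero; the NOT wrapper inverts that result. A minimal sketch (illustrative; the real autotest_common.sh version also folds exit codes, which is the es=234 -> es=106 -> es=1 dance traced above):

  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded -> test fails
      fi
      return 0        # command failed as expected -> test passes
  }
  NOT accel_perf -t 1 -w compress   # no -l <file>, so accel_perf aborts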
00:08:16.889   00:39:06	-- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:16.889   00:39:06	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:08:16.889   00:39:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:16.889   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:16.889  ************************************
00:08:16.889  START TEST accel_compress_verify
00:08:16.889  ************************************
00:08:16.889   00:39:06	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:16.889   00:39:06	-- common/autotest_common.sh@650 -- # local es=0
00:08:16.889   00:39:06	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:16.889   00:39:06	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:08:16.889   00:39:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:16.889    00:39:06	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:08:16.889   00:39:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:16.889   00:39:06	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:16.889   00:39:06	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:16.889    00:39:06	-- accel/accel.sh@12 -- # build_accel_config
00:08:16.889    00:39:06	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:16.889    00:39:06	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:16.889    00:39:06	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:16.889    00:39:06	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:16.889    00:39:06	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:16.889    00:39:06	-- accel/accel.sh@41 -- # local IFS=,
00:08:16.889    00:39:06	-- accel/accel.sh@42 -- # jq -r .
00:08:17.233  [2024-12-17 00:39:06.146340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:17.233  [2024-12-17 00:39:06.146414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951008 ]
00:08:17.233  EAL: No free 2048 kB hugepages reported on node 1
00:08:17.233  [2024-12-17 00:39:06.252802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.233  [2024-12-17 00:39:06.299517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.233  [2024-12-17 00:39:06.344501] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:17.233  [2024-12-17 00:39:06.407003] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:08:17.588  
00:08:17.588  Compression does not support the verify option, aborting.
00:08:17.588   00:39:06	-- common/autotest_common.sh@653 -- # es=161
00:08:17.588   00:39:06	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:17.588   00:39:06	-- common/autotest_common.sh@662 -- # es=33
00:08:17.588   00:39:06	-- common/autotest_common.sh@663 -- # case "$es" in
00:08:17.588   00:39:06	-- common/autotest_common.sh@670 -- # es=1
00:08:17.588   00:39:06	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:17.588  
00:08:17.588  real	0m0.368s
00:08:17.588  user	0m0.250s
00:08:17.588  sys	0m0.158s
00:08:17.588   00:39:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:17.588   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.588  ************************************
00:08:17.588  END TEST accel_compress_verify
00:08:17.588  ************************************
00:08:17.588   00:39:06	-- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:08:17.588   00:39:06	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:17.588   00:39:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:17.588   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.588  ************************************
00:08:17.588  START TEST accel_wrong_workload
00:08:17.588  ************************************
00:08:17.588   00:39:06	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar
00:08:17.588   00:39:06	-- common/autotest_common.sh@650 -- # local es=0
00:08:17.588   00:39:06	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:08:17.588   00:39:06	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:08:17.589   00:39:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:17.589    00:39:06	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:08:17.589   00:39:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:17.589   00:39:06	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar
00:08:17.589   00:39:06	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:08:17.589    00:39:06	-- accel/accel.sh@12 -- # build_accel_config
00:08:17.589    00:39:06	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:17.589    00:39:06	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:17.589    00:39:06	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:17.589    00:39:06	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:17.589    00:39:06	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:17.589    00:39:06	-- accel/accel.sh@41 -- # local IFS=,
00:08:17.589    00:39:06	-- accel/accel.sh@42 -- # jq -r .
00:08:17.589  Unsupported workload type: foobar
00:08:17.589  [2024-12-17 00:39:06.558005] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:08:17.589  accel_perf options:
00:08:17.589  	[-h help message]
00:08:17.589  	[-q queue depth per core]
00:08:17.589  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:08:17.589  	[-T number of threads per core
00:08:17.589  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:08:17.589  	[-t time in seconds]
00:08:17.589  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:08:17.589  	[                                       dif_verify, , dif_generate, dif_generate_copy
00:08:17.589  	[-M assign module to the operation, not compatible with accel_assign_opc RPC
00:08:17.589  	[-l for compress/decompress workloads, name of uncompressed input file
00:08:17.589  	[-S for crc32c workload, use this seed value (default 0)
00:08:17.589  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:08:17.589  	[-f for fill workload, use this BYTE value (default 255)
00:08:17.589  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:08:17.589  	[-y verify result if this switch is on]
00:08:17.589  	[-a tasks to allocate per core (default: same value as -q)]
00:08:17.589  		Can be used to spread operations across a wider range of memory.
00:08:17.589   00:39:06	-- common/autotest_common.sh@653 -- # es=1
00:08:17.589   00:39:06	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:17.589   00:39:06	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:17.589   00:39:06	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:17.589  
00:08:17.589  real	0m0.038s
00:08:17.589  user	0m0.022s
00:08:17.589  sys	0m0.016s
00:08:17.589   00:39:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:17.589   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.589  ************************************
00:08:17.589  END TEST accel_wrong_workload
00:08:17.589  ************************************
00:08:17.589  Error: writing output failed: Broken pipe
00:08:17.589   00:39:06	-- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:08:17.589   00:39:06	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:08:17.589   00:39:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:17.589   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.589  ************************************
00:08:17.589  START TEST accel_negative_buffers
00:08:17.589  ************************************
00:08:17.589   00:39:06	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:08:17.589   00:39:06	-- common/autotest_common.sh@650 -- # local es=0
00:08:17.589   00:39:06	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:08:17.589   00:39:06	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:08:17.589   00:39:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:17.589    00:39:06	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:08:17.589   00:39:06	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:17.589   00:39:06	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1
00:08:17.589   00:39:06	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:08:17.589    00:39:06	-- accel/accel.sh@12 -- # build_accel_config
00:08:17.589    00:39:06	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:17.589    00:39:06	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:17.589    00:39:06	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:17.589    00:39:06	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:17.589    00:39:06	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:17.589    00:39:06	-- accel/accel.sh@41 -- # local IFS=,
00:08:17.589    00:39:06	-- accel/accel.sh@42 -- # jq -r .
00:08:17.589  -x option must be non-negative.
00:08:17.589  [2024-12-17 00:39:06.640966] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:08:17.589  accel_perf options:
00:08:17.589  	[-h help message]
00:08:17.589  	[-q queue depth per core]
00:08:17.589  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:08:17.589  	[-T number of threads per core
00:08:17.589  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:08:17.589  	[-t time in seconds]
00:08:17.589  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:08:17.589  	[                                       dif_verify, , dif_generate, dif_generate_copy
00:08:17.589  	[-M assign module to the operation, not compatible with accel_assign_opc RPC
00:08:17.589  	[-l for compress/decompress workloads, name of uncompressed input file
00:08:17.589  	[-S for crc32c workload, use this seed value (default 0)
00:08:17.589  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:08:17.589  	[-f for fill workload, use this BYTE value (default 255)
00:08:17.589  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:08:17.589  	[-y verify result if this switch is on]
00:08:17.589  	[-a tasks to allocate per core (default: same value as -q)]
00:08:17.589  		Can be used to spread operations across a wider range of memory.
00:08:17.589   00:39:06	-- common/autotest_common.sh@653 -- # es=1
00:08:17.589   00:39:06	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:17.589   00:39:06	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:17.589   00:39:06	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:17.589  
00:08:17.589  real	0m0.037s
00:08:17.589  user	0m0.019s
00:08:17.589  sys	0m0.017s
00:08:17.589   00:39:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:17.589   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.589  ************************************
00:08:17.589  END TEST accel_negative_buffers
00:08:17.589  ************************************
00:08:17.589  Error: writing output failed: Broken pipe
00:08:17.589   00:39:06	-- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:08:17.589   00:39:06	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:08:17.589   00:39:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:17.589   00:39:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.589  ************************************
00:08:17.589  START TEST accel_crc32c
00:08:17.589  ************************************
00:08:17.589   00:39:06	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y
00:08:17.589   00:39:06	-- accel/accel.sh@16 -- # local accel_opc
00:08:17.589   00:39:06	-- accel/accel.sh@17 -- # local accel_module
00:08:17.589    00:39:06	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:08:17.589    00:39:06	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:08:17.589     00:39:06	-- accel/accel.sh@12 -- # build_accel_config
00:08:17.589     00:39:06	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:17.589     00:39:06	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:17.589     00:39:06	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:17.589     00:39:06	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:17.589     00:39:06	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:17.589     00:39:06	-- accel/accel.sh@41 -- # local IFS=,
00:08:17.589     00:39:06	-- accel/accel.sh@42 -- # jq -r .
00:08:17.589  [2024-12-17 00:39:06.724312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:17.589  [2024-12-17 00:39:06.724380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951236 ]
00:08:17.589  EAL: No free 2048 kB hugepages reported on node 1
00:08:17.849  [2024-12-17 00:39:06.831928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.849  [2024-12-17 00:39:06.883254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.227   00:39:08	-- accel/accel.sh@18 -- # out='
00:08:19.227  SPDK Configuration:
00:08:19.227  Core mask:      0x1
00:08:19.227  
00:08:19.227  Accel Perf Configuration:
00:08:19.227  Workload Type:  crc32c
00:08:19.227  CRC-32C seed:   32
00:08:19.228  Transfer size:  4096 bytes
00:08:19.228  Vector count    1
00:08:19.228  Module:         software
00:08:19.228  Queue depth:    32
00:08:19.228  Allocate depth: 32
00:08:19.228  # threads/core: 1
00:08:19.228  Run time:       1 seconds
00:08:19.228  Verify:         Yes
00:08:19.228  
00:08:19.228  Running for 1 seconds...
00:08:19.228  
00:08:19.228  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:19.228  ------------------------------------------------------------------------------------
00:08:19.228  0,0                      373760/s       1460 MiB/s                0                0
00:08:19.228  ====================================================================================
00:08:19.228  Total                    373760/s       1460 MiB/s                0                0'
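Quick check of the software crc32c row above, transfers times 4 KiB expressed in MiB/s (worked example, not harness output):

  echo $(( 373760 * 4096 / 1024 / 1024 ))       # 1460 (MiB/s)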
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228    00:39:08	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:08:19.228    00:39:08	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:08:19.228     00:39:08	-- accel/accel.sh@12 -- # build_accel_config
00:08:19.228     00:39:08	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:19.228     00:39:08	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:19.228     00:39:08	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:19.228     00:39:08	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:19.228     00:39:08	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:19.228     00:39:08	-- accel/accel.sh@41 -- # local IFS=,
00:08:19.228     00:39:08	-- accel/accel.sh@42 -- # jq -r .
00:08:19.228  [2024-12-17 00:39:08.117635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:19.228  [2024-12-17 00:39:08.117704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951431 ]
00:08:19.228  EAL: No free 2048 kB hugepages reported on node 1
00:08:19.228  [2024-12-17 00:39:08.223321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:19.228  [2024-12-17 00:39:08.273717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=0x1
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=crc32c
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=32
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=software
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@23 -- # accel_module=software
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=32
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=32
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=1
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=Yes
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:19.228   00:39:08	-- accel/accel.sh@21 -- # val=
00:08:19.228   00:39:08	-- accel/accel.sh@22 -- # case "$var" in
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # IFS=:
00:08:19.228   00:39:08	-- accel/accel.sh@20 -- # read -r var val
00:08:20.608   00:39:09	-- accel/accel.sh@21 -- # val=
00:08:20.608   00:39:09	-- accel/accel.sh@22 -- # case "$var" in
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # IFS=:
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # read -r var val
00:08:20.608   00:39:09	-- accel/accel.sh@21 -- # val=
00:08:20.608   00:39:09	-- accel/accel.sh@22 -- # case "$var" in
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # IFS=:
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # read -r var val
00:08:20.608   00:39:09	-- accel/accel.sh@21 -- # val=
00:08:20.608   00:39:09	-- accel/accel.sh@22 -- # case "$var" in
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # IFS=:
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # read -r var val
00:08:20.608   00:39:09	-- accel/accel.sh@21 -- # val=
00:08:20.608   00:39:09	-- accel/accel.sh@22 -- # case "$var" in
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # IFS=:
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # read -r var val
00:08:20.608   00:39:09	-- accel/accel.sh@21 -- # val=
00:08:20.608   00:39:09	-- accel/accel.sh@22 -- # case "$var" in
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # IFS=:
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # read -r var val
00:08:20.608   00:39:09	-- accel/accel.sh@21 -- # val=
00:08:20.608   00:39:09	-- accel/accel.sh@22 -- # case "$var" in
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # IFS=:
00:08:20.608   00:39:09	-- accel/accel.sh@20 -- # read -r var val
00:08:20.608   00:39:09	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:20.608   00:39:09	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:08:20.608   00:39:09	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:20.608  
00:08:20.608  real	0m2.777s
00:08:20.608  user	0m2.434s
00:08:20.608  sys	0m0.349s
00:08:20.608   00:39:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:20.608   00:39:09	-- common/autotest_common.sh@10 -- # set +x
00:08:20.608  ************************************
00:08:20.608  END TEST accel_crc32c
00:08:20.608  ************************************
00:08:20.608   00:39:09	-- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:08:20.608   00:39:09	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:08:20.608   00:39:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:20.608   00:39:09	-- common/autotest_common.sh@10 -- # set +x
00:08:20.608  ************************************
00:08:20.608  START TEST accel_crc32c_C2
00:08:20.608  ************************************
00:08:20.608   00:39:09	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2
00:08:20.608   00:39:09	-- accel/accel.sh@16 -- # local accel_opc
00:08:20.608   00:39:09	-- accel/accel.sh@17 -- # local accel_module
00:08:20.608    00:39:09	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2
00:08:20.608    00:39:09	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:08:20.608     00:39:09	-- accel/accel.sh@12 -- # build_accel_config
00:08:20.608     00:39:09	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:20.608     00:39:09	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:20.608     00:39:09	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:20.608     00:39:09	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:20.608     00:39:09	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:20.608     00:39:09	-- accel/accel.sh@41 -- # local IFS=,
00:08:20.608     00:39:09	-- accel/accel.sh@42 -- # jq -r .
00:08:20.608  [2024-12-17 00:39:09.547981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:20.608  [2024-12-17 00:39:09.548055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951626 ]
00:08:20.608  EAL: No free 2048 kB hugepages reported on node 1
00:08:20.608  [2024-12-17 00:39:09.654328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.608  [2024-12-17 00:39:09.705387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.986   00:39:10	-- accel/accel.sh@18 -- # out='
00:08:21.986  SPDK Configuration:
00:08:21.986  Core mask:      0x1
00:08:21.986  
00:08:21.986  Accel Perf Configuration:
00:08:21.986  Workload Type:  crc32c
00:08:21.986  CRC-32C seed:   0
00:08:21.986  Transfer size:  4096 bytes
00:08:21.986  Vector count    2
00:08:21.986  Module:         software
00:08:21.986  Queue depth:    32
00:08:21.986  Allocate depth: 32
00:08:21.986  # threads/core: 1
00:08:21.986  Run time:       1 seconds
00:08:21.986  Verify:         Yes
00:08:21.986  
00:08:21.986  Running for 1 seconds...
00:08:21.986  
00:08:21.986  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:21.986  ------------------------------------------------------------------------------------
00:08:21.986  0,0                      294528/s       2301 MiB/s                0                0
00:08:21.986  ====================================================================================
00:08:21.986  Total                    294528/s       1150 MiB/s                0                0'
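Worked check of the -C 2 numbers, taking bandwidth as transfers times transfer size times vector count: the per-core row accounts for both 4 KiB buffers, while the Total row's 1150 MiB/s corresponds to a single buffer per transfer (294528 * 4096 / 2^20 = 1150):

  echo $(( 294528 * 4096 * 2 / 1024 / 1024 ))   # 2301 (MiB/s), matches row 0,0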
00:08:21.986   00:39:10	-- accel/accel.sh@20 -- # IFS=:
00:08:21.986   00:39:10	-- accel/accel.sh@20 -- # read -r var val
00:08:21.986    00:39:10	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:08:21.986    00:39:10	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:08:21.986     00:39:10	-- accel/accel.sh@12 -- # build_accel_config
00:08:21.986     00:39:10	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:21.986     00:39:10	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:21.986     00:39:10	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:21.986     00:39:10	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:21.986     00:39:10	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:21.986     00:39:10	-- accel/accel.sh@41 -- # local IFS=,
00:08:21.986     00:39:10	-- accel/accel.sh@42 -- # jq -r .
00:08:21.986  [2024-12-17 00:39:10.929259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:21.987  [2024-12-17 00:39:10.929327] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951805 ]
00:08:21.987  EAL: No free 2048 kB hugepages reported on node 1
00:08:21.987  [2024-12-17 00:39:11.034546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:21.987  [2024-12-17 00:39:11.084129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=0x1
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=crc32c
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=0
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=software
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@23 -- # accel_module=software
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=32
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=32
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=1
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=Yes
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:21.987   00:39:11	-- accel/accel.sh@21 -- # val=
00:08:21.987   00:39:11	-- accel/accel.sh@22 -- # case "$var" in
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # IFS=:
00:08:21.987   00:39:11	-- accel/accel.sh@20 -- # read -r var val
00:08:23.366   00:39:12	-- accel/accel.sh@21 -- # val=
00:08:23.366   00:39:12	-- accel/accel.sh@22 -- # case "$var" in
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # IFS=:
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # read -r var val
00:08:23.366   00:39:12	-- accel/accel.sh@21 -- # val=
00:08:23.366   00:39:12	-- accel/accel.sh@22 -- # case "$var" in
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # IFS=:
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # read -r var val
00:08:23.366   00:39:12	-- accel/accel.sh@21 -- # val=
00:08:23.366   00:39:12	-- accel/accel.sh@22 -- # case "$var" in
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # IFS=:
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # read -r var val
00:08:23.366   00:39:12	-- accel/accel.sh@21 -- # val=
00:08:23.366   00:39:12	-- accel/accel.sh@22 -- # case "$var" in
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # IFS=:
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # read -r var val
00:08:23.366   00:39:12	-- accel/accel.sh@21 -- # val=
00:08:23.366   00:39:12	-- accel/accel.sh@22 -- # case "$var" in
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # IFS=:
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # read -r var val
00:08:23.366   00:39:12	-- accel/accel.sh@21 -- # val=
00:08:23.366   00:39:12	-- accel/accel.sh@22 -- # case "$var" in
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # IFS=:
00:08:23.366   00:39:12	-- accel/accel.sh@20 -- # read -r var val
00:08:23.366   00:39:12	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:23.366   00:39:12	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:08:23.366   00:39:12	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:23.366  
00:08:23.366  real	0m2.778s
00:08:23.366  user	0m2.442s
00:08:23.366  sys	0m0.340s
00:08:23.366   00:39:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:23.366   00:39:12	-- common/autotest_common.sh@10 -- # set +x
00:08:23.366  ************************************
00:08:23.366  END TEST accel_crc32c_C2
00:08:23.366  ************************************
00:08:23.366   00:39:12	-- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:08:23.366   00:39:12	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:23.366   00:39:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:23.366   00:39:12	-- common/autotest_common.sh@10 -- # set +x
00:08:23.366  ************************************
00:08:23.366  START TEST accel_copy
00:08:23.366  ************************************
00:08:23.366   00:39:12	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y
00:08:23.366   00:39:12	-- accel/accel.sh@16 -- # local accel_opc
00:08:23.366   00:39:12	-- accel/accel.sh@17 -- # local accel_module
00:08:23.366    00:39:12	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y
00:08:23.366    00:39:12	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:08:23.366     00:39:12	-- accel/accel.sh@12 -- # build_accel_config
00:08:23.366     00:39:12	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:23.366     00:39:12	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:23.366     00:39:12	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:23.366     00:39:12	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:23.366     00:39:12	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:23.366     00:39:12	-- accel/accel.sh@41 -- # local IFS=,
00:08:23.366     00:39:12	-- accel/accel.sh@42 -- # jq -r .
00:08:23.366  [2024-12-17 00:39:12.367430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:23.366  [2024-12-17 00:39:12.367517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952007 ]
00:08:23.366  EAL: No free 2048 kB hugepages reported on node 1
00:08:23.366  [2024-12-17 00:39:12.473530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:23.366  [2024-12-17 00:39:12.523566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.745   00:39:13	-- accel/accel.sh@18 -- # out='
00:08:24.745  SPDK Configuration:
00:08:24.745  Core mask:      0x1
00:08:24.745  
00:08:24.745  Accel Perf Configuration:
00:08:24.745  Workload Type:  copy
00:08:24.745  Transfer size:  4096 bytes
00:08:24.745  Vector count    1
00:08:24.745  Module:         software
00:08:24.745  Queue depth:    32
00:08:24.745  Allocate depth: 32
00:08:24.745  # threads/core: 1
00:08:24.745  Run time:       1 seconds
00:08:24.745  Verify:         Yes
00:08:24.745  
00:08:24.745  Running for 1 seconds...
00:08:24.745  
00:08:24.745  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:24.745  ------------------------------------------------------------------------------------
00:08:24.745  0,0                      276896/s       1081 MiB/s                0                0
00:08:24.745  ====================================================================================
00:08:24.745  Total                    276896/s       1081 MiB/s                0                0'
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745    00:39:13	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:08:24.745    00:39:13	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:08:24.745     00:39:13	-- accel/accel.sh@12 -- # build_accel_config
00:08:24.745     00:39:13	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:24.745     00:39:13	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:24.745     00:39:13	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:24.745     00:39:13	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:24.745     00:39:13	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:24.745     00:39:13	-- accel/accel.sh@41 -- # local IFS=,
00:08:24.745     00:39:13	-- accel/accel.sh@42 -- # jq -r .
00:08:24.745  [2024-12-17 00:39:13.762307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:24.745  [2024-12-17 00:39:13.762375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952186 ]
00:08:24.745  EAL: No free 2048 kB hugepages reported on node 1
00:08:24.745  [2024-12-17 00:39:13.867345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.745  [2024-12-17 00:39:13.916810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=0x1
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=copy
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@24 -- # accel_opc=copy
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.745   00:39:13	-- accel/accel.sh@21 -- # val=software
00:08:24.745   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.745   00:39:13	-- accel/accel.sh@23 -- # accel_module=software
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.745   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.746   00:39:13	-- accel/accel.sh@21 -- # val=32
00:08:24.746   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.746   00:39:13	-- accel/accel.sh@21 -- # val=32
00:08:24.746   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.746   00:39:13	-- accel/accel.sh@21 -- # val=1
00:08:24.746   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.746   00:39:13	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:24.746   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.746   00:39:13	-- accel/accel.sh@21 -- # val=Yes
00:08:24.746   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.746   00:39:13	-- accel/accel.sh@21 -- # val=
00:08:24.746   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:24.746   00:39:13	-- accel/accel.sh@21 -- # val=
00:08:24.746   00:39:13	-- accel/accel.sh@22 -- # case "$var" in
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # IFS=:
00:08:24.746   00:39:13	-- accel/accel.sh@20 -- # read -r var val
00:08:26.122   00:39:15	-- accel/accel.sh@21 -- # val=
00:08:26.122   00:39:15	-- accel/accel.sh@22 -- # case "$var" in
00:08:26.122   00:39:15	-- accel/accel.sh@20 -- # IFS=:
00:08:26.122   00:39:15	-- accel/accel.sh@20 -- # read -r var val
00:08:26.123   00:39:15	-- accel/accel.sh@21 -- # val=
00:08:26.123   00:39:15	-- accel/accel.sh@22 -- # case "$var" in
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # IFS=:
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # read -r var val
00:08:26.123   00:39:15	-- accel/accel.sh@21 -- # val=
00:08:26.123   00:39:15	-- accel/accel.sh@22 -- # case "$var" in
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # IFS=:
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # read -r var val
00:08:26.123   00:39:15	-- accel/accel.sh@21 -- # val=
00:08:26.123   00:39:15	-- accel/accel.sh@22 -- # case "$var" in
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # IFS=:
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # read -r var val
00:08:26.123   00:39:15	-- accel/accel.sh@21 -- # val=
00:08:26.123   00:39:15	-- accel/accel.sh@22 -- # case "$var" in
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # IFS=:
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # read -r var val
00:08:26.123   00:39:15	-- accel/accel.sh@21 -- # val=
00:08:26.123   00:39:15	-- accel/accel.sh@22 -- # case "$var" in
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # IFS=:
00:08:26.123   00:39:15	-- accel/accel.sh@20 -- # read -r var val
00:08:26.123   00:39:15	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:26.123   00:39:15	-- accel/accel.sh@28 -- # [[ -n copy ]]
00:08:26.123   00:39:15	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:26.123  
00:08:26.123  real	0m2.791s
00:08:26.123  user	0m2.446s
00:08:26.123  sys	0m0.350s
00:08:26.123   00:39:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:26.123   00:39:15	-- common/autotest_common.sh@10 -- # set +x
00:08:26.123  ************************************
00:08:26.123  END TEST accel_copy
00:08:26.123  ************************************
00:08:26.123   00:39:15	-- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:26.123   00:39:15	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:26.123   00:39:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:26.123   00:39:15	-- common/autotest_common.sh@10 -- # set +x
00:08:26.123  ************************************
00:08:26.123  START TEST accel_fill
00:08:26.123  ************************************
00:08:26.123   00:39:15	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:26.123   00:39:15	-- accel/accel.sh@16 -- # local accel_opc
00:08:26.123   00:39:15	-- accel/accel.sh@17 -- # local accel_module
00:08:26.123    00:39:15	-- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:26.123    00:39:15	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:26.123     00:39:15	-- accel/accel.sh@12 -- # build_accel_config
00:08:26.123     00:39:15	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:26.123     00:39:15	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:26.123     00:39:15	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:26.123     00:39:15	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:26.123     00:39:15	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:26.123     00:39:15	-- accel/accel.sh@41 -- # local IFS=,
00:08:26.123     00:39:15	-- accel/accel.sh@42 -- # jq -r .
00:08:26.123  [2024-12-17 00:39:15.189038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:26.123  [2024-12-17 00:39:15.189106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952387 ]
00:08:26.123  EAL: No free 2048 kB hugepages reported on node 1
00:08:26.123  [2024-12-17 00:39:15.293453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.123  [2024-12-17 00:39:15.343300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.500   00:39:16	-- accel/accel.sh@18 -- # out='
00:08:27.500  SPDK Configuration:
00:08:27.500  Core mask:      0x1
00:08:27.500  
00:08:27.500  Accel Perf Configuration:
00:08:27.500  Workload Type:  fill
00:08:27.500  Fill pattern:   0x80
00:08:27.500  Transfer size:  4096 bytes
00:08:27.500  Vector count    1
00:08:27.500  Module:         software
00:08:27.500  Queue depth:    64
00:08:27.500  Allocate depth: 64
00:08:27.500  # threads/core: 1
00:08:27.500  Run time:       1 seconds
00:08:27.500  Verify:         Yes
00:08:27.500  
00:08:27.500  Running for 1 seconds...
00:08:27.500  
00:08:27.500  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:27.500  ------------------------------------------------------------------------------------
00:08:27.500  0,0                      428288/s       1673 MiB/s                0                0
00:08:27.500  ====================================================================================
00:08:27.500  Total                    428288/s       1673 MiB/s                0                0'
00:08:27.500   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.500   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.500    00:39:16	-- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:27.500    00:39:16	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:27.500     00:39:16	-- accel/accel.sh@12 -- # build_accel_config
00:08:27.500     00:39:16	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:27.500     00:39:16	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:27.500     00:39:16	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:27.500     00:39:16	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:27.500     00:39:16	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:27.500     00:39:16	-- accel/accel.sh@41 -- # local IFS=,
00:08:27.500     00:39:16	-- accel/accel.sh@42 -- # jq -r .
00:08:27.500  [2024-12-17 00:39:16.577927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:27.500  [2024-12-17 00:39:16.577995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952565 ]
00:08:27.500  EAL: No free 2048 kB hugepages reported on node 1
00:08:27.500  [2024-12-17 00:39:16.683559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:27.500  [2024-12-17 00:39:16.732773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=0x1
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=fill
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@24 -- # accel_opc=fill
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=0x80
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=software
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@23 -- # accel_module=software
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=64
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=64
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=1
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=Yes
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:27.759   00:39:16	-- accel/accel.sh@21 -- # val=
00:08:27.759   00:39:16	-- accel/accel.sh@22 -- # case "$var" in
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # IFS=:
00:08:27.759   00:39:16	-- accel/accel.sh@20 -- # read -r var val
00:08:28.696   00:39:17	-- accel/accel.sh@21 -- # val=
00:08:28.696   00:39:17	-- accel/accel.sh@22 -- # case "$var" in
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # IFS=:
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # read -r var val
00:08:28.696   00:39:17	-- accel/accel.sh@21 -- # val=
00:08:28.696   00:39:17	-- accel/accel.sh@22 -- # case "$var" in
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # IFS=:
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # read -r var val
00:08:28.696   00:39:17	-- accel/accel.sh@21 -- # val=
00:08:28.696   00:39:17	-- accel/accel.sh@22 -- # case "$var" in
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # IFS=:
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # read -r var val
00:08:28.696   00:39:17	-- accel/accel.sh@21 -- # val=
00:08:28.696   00:39:17	-- accel/accel.sh@22 -- # case "$var" in
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # IFS=:
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # read -r var val
00:08:28.696   00:39:17	-- accel/accel.sh@21 -- # val=
00:08:28.696   00:39:17	-- accel/accel.sh@22 -- # case "$var" in
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # IFS=:
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # read -r var val
00:08:28.696   00:39:17	-- accel/accel.sh@21 -- # val=
00:08:28.696   00:39:17	-- accel/accel.sh@22 -- # case "$var" in
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # IFS=:
00:08:28.696   00:39:17	-- accel/accel.sh@20 -- # read -r var val
00:08:28.696   00:39:17	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:28.696   00:39:17	-- accel/accel.sh@28 -- # [[ -n fill ]]
00:08:28.696   00:39:17	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:28.696  
00:08:28.696  real	0m2.775s
00:08:28.696  user	0m2.440s
00:08:28.696  sys	0m0.339s
00:08:28.696   00:39:17	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:28.696   00:39:17	-- common/autotest_common.sh@10 -- # set +x
00:08:28.696  ************************************
00:08:28.696  END TEST accel_fill
00:08:28.696  ************************************
00:08:28.955   00:39:17	-- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:08:28.955   00:39:17	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:28.955   00:39:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:28.955   00:39:17	-- common/autotest_common.sh@10 -- # set +x
00:08:28.955  ************************************
00:08:28.955  START TEST accel_copy_crc32c
00:08:28.955  ************************************
00:08:28.955   00:39:17	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y
00:08:28.955   00:39:17	-- accel/accel.sh@16 -- # local accel_opc
00:08:28.955   00:39:17	-- accel/accel.sh@17 -- # local accel_module
00:08:28.955    00:39:17	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y
00:08:28.955    00:39:17	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:08:28.955     00:39:17	-- accel/accel.sh@12 -- # build_accel_config
00:08:28.955     00:39:17	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:28.955     00:39:18	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:28.955     00:39:18	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:28.955     00:39:18	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:28.955     00:39:18	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:28.955     00:39:18	-- accel/accel.sh@41 -- # local IFS=,
00:08:28.955     00:39:18	-- accel/accel.sh@42 -- # jq -r .
00:08:28.955  [2024-12-17 00:39:18.025406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:28.955  [2024-12-17 00:39:18.025499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952763 ]
00:08:28.955  EAL: No free 2048 kB hugepages reported on node 1
00:08:28.955  [2024-12-17 00:39:18.119577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:28.955  [2024-12-17 00:39:18.174370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.334   00:39:19	-- accel/accel.sh@18 -- # out='
00:08:30.334  SPDK Configuration:
00:08:30.334  Core mask:      0x1
00:08:30.334  
00:08:30.334  Accel Perf Configuration:
00:08:30.334  Workload Type:  copy_crc32c
00:08:30.334  CRC-32C seed:   0
00:08:30.334  Vector size:    4096 bytes
00:08:30.334  Transfer size:  4096 bytes
00:08:30.334  Vector count    1
00:08:30.334  Module:         software
00:08:30.334  Queue depth:    32
00:08:30.334  Allocate depth: 32
00:08:30.334  # threads/core: 1
00:08:30.334  Run time:       1 seconds
00:08:30.334  Verify:         Yes
00:08:30.334  
00:08:30.334  Running for 1 seconds...
00:08:30.334  
00:08:30.334  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:30.334  ------------------------------------------------------------------------------------
00:08:30.334  0,0                      212992/s        832 MiB/s                0                0
00:08:30.334  ====================================================================================
00:08:30.334  Total                    212992/s        832 MiB/s                0                0'
00:08:30.334   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.334   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.334    00:39:19	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:08:30.334    00:39:19	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:08:30.334     00:39:19	-- accel/accel.sh@12 -- # build_accel_config
00:08:30.334     00:39:19	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:30.334     00:39:19	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:30.334     00:39:19	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:30.334     00:39:19	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:30.334     00:39:19	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:30.334     00:39:19	-- accel/accel.sh@41 -- # local IFS=,
00:08:30.334     00:39:19	-- accel/accel.sh@42 -- # jq -r .
00:08:30.334  [2024-12-17 00:39:19.413091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:30.334  [2024-12-17 00:39:19.413159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952947 ]
00:08:30.334  EAL: No free 2048 kB hugepages reported on node 1
00:08:30.334  [2024-12-17 00:39:19.519341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.334  [2024-12-17 00:39:19.567087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.593   00:39:19	-- accel/accel.sh@21 -- # val=
00:08:30.593   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.593   00:39:19	-- accel/accel.sh@21 -- # val=
00:08:30.593   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.593   00:39:19	-- accel/accel.sh@21 -- # val=0x1
00:08:30.593   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.593   00:39:19	-- accel/accel.sh@21 -- # val=
00:08:30.593   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.593   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=copy_crc32c
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=0
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=software
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@23 -- # accel_module=software
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=32
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=32
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=1
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=Yes
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:30.594   00:39:19	-- accel/accel.sh@21 -- # val=
00:08:30.594   00:39:19	-- accel/accel.sh@22 -- # case "$var" in
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # IFS=:
00:08:30.594   00:39:19	-- accel/accel.sh@20 -- # read -r var val
00:08:31.531   00:39:20	-- accel/accel.sh@21 -- # val=
00:08:31.531   00:39:20	-- accel/accel.sh@22 -- # case "$var" in
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # IFS=:
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # read -r var val
00:08:31.531   00:39:20	-- accel/accel.sh@21 -- # val=
00:08:31.531   00:39:20	-- accel/accel.sh@22 -- # case "$var" in
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # IFS=:
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # read -r var val
00:08:31.531   00:39:20	-- accel/accel.sh@21 -- # val=
00:08:31.531   00:39:20	-- accel/accel.sh@22 -- # case "$var" in
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # IFS=:
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # read -r var val
00:08:31.531   00:39:20	-- accel/accel.sh@21 -- # val=
00:08:31.531   00:39:20	-- accel/accel.sh@22 -- # case "$var" in
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # IFS=:
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # read -r var val
00:08:31.531   00:39:20	-- accel/accel.sh@21 -- # val=
00:08:31.531   00:39:20	-- accel/accel.sh@22 -- # case "$var" in
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # IFS=:
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # read -r var val
00:08:31.531   00:39:20	-- accel/accel.sh@21 -- # val=
00:08:31.531   00:39:20	-- accel/accel.sh@22 -- # case "$var" in
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # IFS=:
00:08:31.531   00:39:20	-- accel/accel.sh@20 -- # read -r var val
00:08:31.531   00:39:20	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:31.531   00:39:20	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:08:31.531   00:39:20	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:31.531  
00:08:31.531  real	0m2.779s
00:08:31.531  user	0m2.450s
00:08:31.531  sys	0m0.335s
00:08:31.531   00:39:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:31.531   00:39:20	-- common/autotest_common.sh@10 -- # set +x
00:08:31.531  ************************************
00:08:31.531  END TEST accel_copy_crc32c
00:08:31.531  ************************************
00:08:31.790   00:39:20	-- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:08:31.790   00:39:20	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:08:31.790   00:39:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:31.790   00:39:20	-- common/autotest_common.sh@10 -- # set +x
00:08:31.790  ************************************
00:08:31.790  START TEST accel_copy_crc32c_C2
00:08:31.790  ************************************
00:08:31.790   00:39:20	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:08:31.790   00:39:20	-- accel/accel.sh@16 -- # local accel_opc
00:08:31.790   00:39:20	-- accel/accel.sh@17 -- # local accel_module
00:08:31.790    00:39:20	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:08:31.790    00:39:20	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:08:31.790     00:39:20	-- accel/accel.sh@12 -- # build_accel_config
00:08:31.790     00:39:20	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:31.790     00:39:20	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:31.790     00:39:20	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:31.790     00:39:20	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:31.790     00:39:20	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:31.790     00:39:20	-- accel/accel.sh@41 -- # local IFS=,
00:08:31.790     00:39:20	-- accel/accel.sh@42 -- # jq -r .
00:08:31.790  [2024-12-17 00:39:20.849947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:31.790  [2024-12-17 00:39:20.850027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953146 ]
00:08:31.790  EAL: No free 2048 kB hugepages reported on node 1
00:08:31.790  [2024-12-17 00:39:20.953819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.790  [2024-12-17 00:39:21.000697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.169   00:39:22	-- accel/accel.sh@18 -- # out='
00:08:33.169  SPDK Configuration:
00:08:33.169  Core mask:      0x1
00:08:33.169  
00:08:33.169  Accel Perf Configuration:
00:08:33.169  Workload Type:  copy_crc32c
00:08:33.169  CRC-32C seed:   0
00:08:33.169  Vector size:    4096 bytes
00:08:33.169  Transfer size:  8192 bytes
00:08:33.169  Vector count    2
00:08:33.169  Module:         software
00:08:33.169  Queue depth:    32
00:08:33.169  Allocate depth: 32
00:08:33.169  # threads/core: 1
00:08:33.169  Run time:       1 seconds
00:08:33.169  Verify:         Yes
00:08:33.169  
00:08:33.169  Running for 1 seconds...
00:08:33.169  
00:08:33.169  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:33.169  ------------------------------------------------------------------------------------
00:08:33.169  0,0                      153696/s       1200 MiB/s                0                0
00:08:33.169  ====================================================================================
00:08:33.169  Total                    153696/s       1200 MiB/s                0                0'
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.169    00:39:22	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:08:33.169    00:39:22	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:08:33.169     00:39:22	-- accel/accel.sh@12 -- # build_accel_config
00:08:33.169     00:39:22	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:33.169     00:39:22	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:33.169     00:39:22	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:33.169     00:39:22	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:33.169     00:39:22	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:33.169     00:39:22	-- accel/accel.sh@41 -- # local IFS=,
00:08:33.169     00:39:22	-- accel/accel.sh@42 -- # jq -r .
00:08:33.169  [2024-12-17 00:39:22.219549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:33.169  [2024-12-17 00:39:22.219618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953327 ]
00:08:33.169  EAL: No free 2048 kB hugepages reported on node 1
00:08:33.169  [2024-12-17 00:39:22.325688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:33.169  [2024-12-17 00:39:22.372377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.169   00:39:22	-- accel/accel.sh@21 -- # val=
00:08:33.169   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.169   00:39:22	-- accel/accel.sh@21 -- # val=
00:08:33.169   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.169   00:39:22	-- accel/accel.sh@21 -- # val=0x1
00:08:33.169   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.169   00:39:22	-- accel/accel.sh@21 -- # val=
00:08:33.169   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.169   00:39:22	-- accel/accel.sh@21 -- # val=
00:08:33.169   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.169   00:39:22	-- accel/accel.sh@21 -- # val=copy_crc32c
00:08:33.169   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.169   00:39:22	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.169   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.169   00:39:22	-- accel/accel.sh@21 -- # val=0
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.428   00:39:22	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.428   00:39:22	-- accel/accel.sh@21 -- # val='8192 bytes'
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.428   00:39:22	-- accel/accel.sh@21 -- # val=
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.428   00:39:22	-- accel/accel.sh@21 -- # val=software
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@23 -- # accel_module=software
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.428   00:39:22	-- accel/accel.sh@21 -- # val=32
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.428   00:39:22	-- accel/accel.sh@21 -- # val=32
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.428   00:39:22	-- accel/accel.sh@21 -- # val=1
00:08:33.428   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.428   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.429   00:39:22	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:33.429   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.429   00:39:22	-- accel/accel.sh@21 -- # val=Yes
00:08:33.429   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.429   00:39:22	-- accel/accel.sh@21 -- # val=
00:08:33.429   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:33.429   00:39:22	-- accel/accel.sh@21 -- # val=
00:08:33.429   00:39:22	-- accel/accel.sh@22 -- # case "$var" in
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # IFS=:
00:08:33.429   00:39:22	-- accel/accel.sh@20 -- # read -r var val
00:08:34.366   00:39:23	-- accel/accel.sh@21 -- # val=
00:08:34.366   00:39:23	-- accel/accel.sh@22 -- # case "$var" in
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # IFS=:
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # read -r var val
00:08:34.366   00:39:23	-- accel/accel.sh@21 -- # val=
00:08:34.366   00:39:23	-- accel/accel.sh@22 -- # case "$var" in
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # IFS=:
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # read -r var val
00:08:34.366   00:39:23	-- accel/accel.sh@21 -- # val=
00:08:34.366   00:39:23	-- accel/accel.sh@22 -- # case "$var" in
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # IFS=:
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # read -r var val
00:08:34.366   00:39:23	-- accel/accel.sh@21 -- # val=
00:08:34.366   00:39:23	-- accel/accel.sh@22 -- # case "$var" in
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # IFS=:
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # read -r var val
00:08:34.366   00:39:23	-- accel/accel.sh@21 -- # val=
00:08:34.366   00:39:23	-- accel/accel.sh@22 -- # case "$var" in
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # IFS=:
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # read -r var val
00:08:34.366   00:39:23	-- accel/accel.sh@21 -- # val=
00:08:34.366   00:39:23	-- accel/accel.sh@22 -- # case "$var" in
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # IFS=:
00:08:34.366   00:39:23	-- accel/accel.sh@20 -- # read -r var val
00:08:34.366   00:39:23	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:34.366   00:39:23	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:08:34.366   00:39:23	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:34.366  
00:08:34.366  real	0m2.746s
00:08:34.366  user	0m2.434s
00:08:34.366  sys	0m0.319s
00:08:34.366   00:39:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:34.366   00:39:23	-- common/autotest_common.sh@10 -- # set +x
00:08:34.366  ************************************
00:08:34.366  END TEST accel_copy_crc32c_C2
00:08:34.366  ************************************
00:08:34.366   00:39:23	-- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:08:34.366   00:39:23	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:34.366   00:39:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:34.366   00:39:23	-- common/autotest_common.sh@10 -- # set +x
00:08:34.366  ************************************
00:08:34.366  START TEST accel_dualcast
00:08:34.366  ************************************
00:08:34.366   00:39:23	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y
00:08:34.366   00:39:23	-- accel/accel.sh@16 -- # local accel_opc
00:08:34.366   00:39:23	-- accel/accel.sh@17 -- # local accel_module
00:08:34.366    00:39:23	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y
00:08:34.366    00:39:23	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:08:34.366     00:39:23	-- accel/accel.sh@12 -- # build_accel_config
00:08:34.366     00:39:23	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:34.366     00:39:23	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:34.366     00:39:23	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:34.366     00:39:23	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:34.366     00:39:23	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:34.366     00:39:23	-- accel/accel.sh@41 -- # local IFS=,
00:08:34.366     00:39:23	-- accel/accel.sh@42 -- # jq -r .
00:08:34.366  [2024-12-17 00:39:23.621825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:34.366  [2024-12-17 00:39:23.621879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953529 ]
00:08:34.625  EAL: No free 2048 kB hugepages reported on node 1
00:08:34.625  [2024-12-17 00:39:23.712056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.625  [2024-12-17 00:39:23.761969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.003   00:39:24	-- accel/accel.sh@18 -- # out='
00:08:36.003  SPDK Configuration:
00:08:36.003  Core mask:      0x1
00:08:36.003  
00:08:36.003  Accel Perf Configuration:
00:08:36.003  Workload Type:  dualcast
00:08:36.003  Transfer size:  4096 bytes
00:08:36.003  Vector count    1
00:08:36.003  Module:         software
00:08:36.004  Queue depth:    32
00:08:36.004  Allocate depth: 32
00:08:36.004  # threads/core: 1
00:08:36.004  Run time:       1 seconds
00:08:36.004  Verify:         Yes
00:08:36.004  
00:08:36.004  Running for 1 seconds...
00:08:36.004  
00:08:36.004  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:36.004  ------------------------------------------------------------------------------------
00:08:36.004  0,0                      327712/s       1280 MiB/s                0                0
00:08:36.004  ====================================================================================
00:08:36.004  Total                    327712/s       1280 MiB/s                0                0'
00:08:36.004   00:39:24	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:24	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004    00:39:24	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:08:36.004    00:39:24	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:08:36.004     00:39:24	-- accel/accel.sh@12 -- # build_accel_config
00:08:36.004     00:39:24	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:36.004     00:39:24	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:36.004     00:39:24	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:36.004     00:39:24	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:36.004     00:39:24	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:36.004     00:39:24	-- accel/accel.sh@41 -- # local IFS=,
00:08:36.004     00:39:24	-- accel/accel.sh@42 -- # jq -r .
00:08:36.004  [2024-12-17 00:39:24.984018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:36.004  [2024-12-17 00:39:24.984088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953707 ]
00:08:36.004  EAL: No free 2048 kB hugepages reported on node 1
00:08:36.004  [2024-12-17 00:39:25.091401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.004  [2024-12-17 00:39:25.141195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=0x1
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=dualcast
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@24 -- # accel_opc=dualcast
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=software
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@23 -- # accel_module=software
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=32
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=32
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=1
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=Yes
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:36.004   00:39:25	-- accel/accel.sh@21 -- # val=
00:08:36.004   00:39:25	-- accel/accel.sh@22 -- # case "$var" in
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # IFS=:
00:08:36.004   00:39:25	-- accel/accel.sh@20 -- # read -r var val
00:08:37.382   00:39:26	-- accel/accel.sh@21 -- # val=
00:08:37.382   00:39:26	-- accel/accel.sh@22 -- # case "$var" in
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # IFS=:
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # read -r var val
00:08:37.382   00:39:26	-- accel/accel.sh@21 -- # val=
00:08:37.382   00:39:26	-- accel/accel.sh@22 -- # case "$var" in
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # IFS=:
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # read -r var val
00:08:37.382   00:39:26	-- accel/accel.sh@21 -- # val=
00:08:37.382   00:39:26	-- accel/accel.sh@22 -- # case "$var" in
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # IFS=:
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # read -r var val
00:08:37.382   00:39:26	-- accel/accel.sh@21 -- # val=
00:08:37.382   00:39:26	-- accel/accel.sh@22 -- # case "$var" in
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # IFS=:
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # read -r var val
00:08:37.382   00:39:26	-- accel/accel.sh@21 -- # val=
00:08:37.382   00:39:26	-- accel/accel.sh@22 -- # case "$var" in
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # IFS=:
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # read -r var val
00:08:37.382   00:39:26	-- accel/accel.sh@21 -- # val=
00:08:37.382   00:39:26	-- accel/accel.sh@22 -- # case "$var" in
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # IFS=:
00:08:37.382   00:39:26	-- accel/accel.sh@20 -- # read -r var val
00:08:37.382   00:39:26	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:37.382   00:39:26	-- accel/accel.sh@28 -- # [[ -n dualcast ]]
00:08:37.382   00:39:26	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:37.382  
00:08:37.382  real	0m2.741s
00:08:37.382  user	0m2.431s
00:08:37.382  sys	0m0.314s
00:08:37.382   00:39:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:37.382   00:39:26	-- common/autotest_common.sh@10 -- # set +x
00:08:37.382  ************************************
00:08:37.382  END TEST accel_dualcast
00:08:37.382  ************************************
00:08:37.382   00:39:26	-- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:08:37.382   00:39:26	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:37.382   00:39:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:37.382   00:39:26	-- common/autotest_common.sh@10 -- # set +x
00:08:37.382  ************************************
00:08:37.382  START TEST accel_compare
00:08:37.382  ************************************
00:08:37.382   00:39:26	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y
00:08:37.382   00:39:26	-- accel/accel.sh@16 -- # local accel_opc
00:08:37.382   00:39:26	-- accel/accel.sh@17 -- # local accel_module
00:08:37.382    00:39:26	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:08:37.382    00:39:26	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:08:37.382     00:39:26	-- accel/accel.sh@12 -- # build_accel_config
00:08:37.382     00:39:26	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:37.382     00:39:26	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:37.382     00:39:26	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:37.382     00:39:26	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:37.382     00:39:26	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:37.382     00:39:26	-- accel/accel.sh@41 -- # local IFS=,
00:08:37.382     00:39:26	-- accel/accel.sh@42 -- # jq -r .
00:08:37.382  [2024-12-17 00:39:26.415548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:37.382  [2024-12-17 00:39:26.415618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953920 ]
00:08:37.382  EAL: No free 2048 kB hugepages reported on node 1
00:08:37.382  [2024-12-17 00:39:26.523163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.382  [2024-12-17 00:39:26.573600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.761   00:39:27	-- accel/accel.sh@18 -- # out='
00:08:38.761  SPDK Configuration:
00:08:38.761  Core mask:      0x1
00:08:38.761  
00:08:38.761  Accel Perf Configuration:
00:08:38.761  Workload Type:  compare
00:08:38.761  Transfer size:  4096 bytes
00:08:38.761  Vector count    1
00:08:38.761  Module:         software
00:08:38.761  Queue depth:    32
00:08:38.761  Allocate depth: 32
00:08:38.761  # threads/core: 1
00:08:38.761  Run time:       1 seconds
00:08:38.761  Verify:         Yes
00:08:38.761  
00:08:38.761  Running for 1 seconds...
00:08:38.761  
00:08:38.761  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:38.761  ------------------------------------------------------------------------------------
00:08:38.761  0,0                      398656/s       1557 MiB/s                0                0
00:08:38.761  ====================================================================================
00:08:38.761  Total                    398656/s       1557 MiB/s                0                0'
00:08:38.761   00:39:27	-- accel/accel.sh@20 -- # IFS=:
00:08:38.761   00:39:27	-- accel/accel.sh@20 -- # read -r var val
00:08:38.761    00:39:27	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:08:38.761    00:39:27	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:08:38.761     00:39:27	-- accel/accel.sh@12 -- # build_accel_config
00:08:38.761     00:39:27	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:38.761     00:39:27	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:38.761     00:39:27	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:38.761     00:39:27	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:38.761     00:39:27	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:38.761     00:39:27	-- accel/accel.sh@41 -- # local IFS=,
00:08:38.761     00:39:27	-- accel/accel.sh@42 -- # jq -r .
00:08:38.761  [2024-12-17 00:39:27.808883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:38.761  [2024-12-17 00:39:27.809027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954106 ]
00:08:38.761  EAL: No free 2048 kB hugepages reported on node 1
00:08:38.761  [2024-12-17 00:39:27.916176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.761  [2024-12-17 00:39:27.965662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.761   00:39:28	-- accel/accel.sh@21 -- # val=
00:08:38.761   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:38.761   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:38.761   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:38.761   00:39:28	-- accel/accel.sh@21 -- # val=
00:08:38.761   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:38.761   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:38.761   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:38.761   00:39:28	-- accel/accel.sh@21 -- # val=0x1
00:08:38.761   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:38.761   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:38.761   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:38.761   00:39:28	-- accel/accel.sh@21 -- # val=
00:08:38.761   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:38.761   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val=
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val=compare
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@24 -- # accel_opc=compare
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val=
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val=software
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@23 -- # accel_module=software
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val=32
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val=32
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.020   00:39:28	-- accel/accel.sh@21 -- # val=1
00:08:39.020   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.020   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.021   00:39:28	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:39.021   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.021   00:39:28	-- accel/accel.sh@21 -- # val=Yes
00:08:39.021   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.021   00:39:28	-- accel/accel.sh@21 -- # val=
00:08:39.021   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.021   00:39:28	-- accel/accel.sh@21 -- # val=
00:08:39.021   00:39:28	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # IFS=:
00:08:39.021   00:39:28	-- accel/accel.sh@20 -- # read -r var val
00:08:39.959   00:39:29	-- accel/accel.sh@21 -- # val=
00:08:39.959   00:39:29	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # IFS=:
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # read -r var val
00:08:39.959   00:39:29	-- accel/accel.sh@21 -- # val=
00:08:39.959   00:39:29	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # IFS=:
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # read -r var val
00:08:39.959   00:39:29	-- accel/accel.sh@21 -- # val=
00:08:39.959   00:39:29	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # IFS=:
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # read -r var val
00:08:39.959   00:39:29	-- accel/accel.sh@21 -- # val=
00:08:39.959   00:39:29	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # IFS=:
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # read -r var val
00:08:39.959   00:39:29	-- accel/accel.sh@21 -- # val=
00:08:39.959   00:39:29	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # IFS=:
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # read -r var val
00:08:39.959   00:39:29	-- accel/accel.sh@21 -- # val=
00:08:39.959   00:39:29	-- accel/accel.sh@22 -- # case "$var" in
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # IFS=:
00:08:39.959   00:39:29	-- accel/accel.sh@20 -- # read -r var val
00:08:39.959   00:39:29	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:39.959   00:39:29	-- accel/accel.sh@28 -- # [[ -n compare ]]
00:08:39.959   00:39:29	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:39.959  
00:08:39.959  real	0m2.788s
00:08:39.959  user	0m2.452s
00:08:39.959  sys	0m0.340s
00:08:39.959   00:39:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:39.959   00:39:29	-- common/autotest_common.sh@10 -- # set +x
00:08:39.959  ************************************
00:08:39.959  END TEST accel_compare
00:08:39.959  ************************************
00:08:39.959   00:39:29	-- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:08:39.959   00:39:29	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:39.959   00:39:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:39.959   00:39:29	-- common/autotest_common.sh@10 -- # set +x
00:08:40.219  ************************************
00:08:40.219  START TEST accel_xor
00:08:40.219  ************************************
00:08:40.219   00:39:29	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y
00:08:40.219   00:39:29	-- accel/accel.sh@16 -- # local accel_opc
00:08:40.219   00:39:29	-- accel/accel.sh@17 -- # local accel_module
00:08:40.219    00:39:29	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:08:40.219    00:39:29	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:08:40.219     00:39:29	-- accel/accel.sh@12 -- # build_accel_config
00:08:40.219     00:39:29	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:40.219     00:39:29	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:40.219     00:39:29	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:40.219     00:39:29	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:40.219     00:39:29	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:40.219     00:39:29	-- accel/accel.sh@41 -- # local IFS=,
00:08:40.219     00:39:29	-- accel/accel.sh@42 -- # jq -r .
00:08:40.219  [2024-12-17 00:39:29.252623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:40.219  [2024-12-17 00:39:29.252690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954345 ]
00:08:40.219  EAL: No free 2048 kB hugepages reported on node 1
00:08:40.219  [2024-12-17 00:39:29.360061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.219  [2024-12-17 00:39:29.410693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.596   00:39:30	-- accel/accel.sh@18 -- # out='
00:08:41.596  SPDK Configuration:
00:08:41.596  Core mask:      0x1
00:08:41.596  
00:08:41.596  Accel Perf Configuration:
00:08:41.596  Workload Type:  xor
00:08:41.596  Source buffers: 2
00:08:41.596  Transfer size:  4096 bytes
00:08:41.596  Vector count    1
00:08:41.596  Module:         software
00:08:41.596  Queue depth:    32
00:08:41.596  Allocate depth: 32
00:08:41.596  # threads/core: 1
00:08:41.596  Run time:       1 seconds
00:08:41.596  Verify:         Yes
00:08:41.596  
00:08:41.596  Running for 1 seconds...
00:08:41.596  
00:08:41.596  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:41.596  ------------------------------------------------------------------------------------
00:08:41.596  0,0                      325760/s       1272 MiB/s                0                0
00:08:41.596  ====================================================================================
00:08:41.596  Total                    325760/s       1272 MiB/s                0                0'
00:08:41.596   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.596   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.596    00:39:30	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:08:41.596    00:39:30	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:08:41.596     00:39:30	-- accel/accel.sh@12 -- # build_accel_config
00:08:41.596     00:39:30	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:41.596     00:39:30	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:41.596     00:39:30	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:41.596     00:39:30	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:41.596     00:39:30	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:41.596     00:39:30	-- accel/accel.sh@41 -- # local IFS=,
00:08:41.596     00:39:30	-- accel/accel.sh@42 -- # jq -r .
00:08:41.596  [2024-12-17 00:39:30.646196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:41.596  [2024-12-17 00:39:30.646266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954546 ]
00:08:41.596  EAL: No free 2048 kB hugepages reported on node 1
00:08:41.596  [2024-12-17 00:39:30.753000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.596  [2024-12-17 00:39:30.802145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.596   00:39:30	-- accel/accel.sh@21 -- # val=
00:08:41.596   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.596   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.596   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.596   00:39:30	-- accel/accel.sh@21 -- # val=
00:08:41.596   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.596   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.596   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.596   00:39:30	-- accel/accel.sh@21 -- # val=0x1
00:08:41.596   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.596   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=xor
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@24 -- # accel_opc=xor
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=2
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=software
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@23 -- # accel_module=software
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=32
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=32
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=1
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=Yes
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:41.856   00:39:30	-- accel/accel.sh@21 -- # val=
00:08:41.856   00:39:30	-- accel/accel.sh@22 -- # case "$var" in
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # IFS=:
00:08:41.856   00:39:30	-- accel/accel.sh@20 -- # read -r var val
00:08:42.793   00:39:32	-- accel/accel.sh@21 -- # val=
00:08:42.793   00:39:32	-- accel/accel.sh@22 -- # case "$var" in
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # IFS=:
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # read -r var val
00:08:42.793   00:39:32	-- accel/accel.sh@21 -- # val=
00:08:42.793   00:39:32	-- accel/accel.sh@22 -- # case "$var" in
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # IFS=:
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # read -r var val
00:08:42.793   00:39:32	-- accel/accel.sh@21 -- # val=
00:08:42.793   00:39:32	-- accel/accel.sh@22 -- # case "$var" in
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # IFS=:
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # read -r var val
00:08:42.793   00:39:32	-- accel/accel.sh@21 -- # val=
00:08:42.793   00:39:32	-- accel/accel.sh@22 -- # case "$var" in
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # IFS=:
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # read -r var val
00:08:42.793   00:39:32	-- accel/accel.sh@21 -- # val=
00:08:42.793   00:39:32	-- accel/accel.sh@22 -- # case "$var" in
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # IFS=:
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # read -r var val
00:08:42.793   00:39:32	-- accel/accel.sh@21 -- # val=
00:08:42.793   00:39:32	-- accel/accel.sh@22 -- # case "$var" in
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # IFS=:
00:08:42.793   00:39:32	-- accel/accel.sh@20 -- # read -r var val
00:08:42.793   00:39:32	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:42.793   00:39:32	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:08:42.793   00:39:32	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:42.793  
00:08:42.793  real	0m2.791s
00:08:42.793  user	0m2.460s
00:08:42.793  sys	0m0.336s
00:08:42.793   00:39:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:42.793   00:39:32	-- common/autotest_common.sh@10 -- # set +x
00:08:42.793  ************************************
00:08:42.793  END TEST accel_xor
00:08:42.793  ************************************
00:08:43.052   00:39:32	-- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:08:43.052   00:39:32	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:08:43.052   00:39:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:43.052   00:39:32	-- common/autotest_common.sh@10 -- # set +x
00:08:43.052  ************************************
00:08:43.052  START TEST accel_xor
00:08:43.052  ************************************
00:08:43.052   00:39:32	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3
00:08:43.052   00:39:32	-- accel/accel.sh@16 -- # local accel_opc
00:08:43.052   00:39:32	-- accel/accel.sh@17 -- # local accel_module
00:08:43.052    00:39:32	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:08:43.052    00:39:32	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:08:43.053     00:39:32	-- accel/accel.sh@12 -- # build_accel_config
00:08:43.053     00:39:32	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:43.053     00:39:32	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:43.053     00:39:32	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:43.053     00:39:32	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:43.053     00:39:32	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:43.053     00:39:32	-- accel/accel.sh@41 -- # local IFS=,
00:08:43.053     00:39:32	-- accel/accel.sh@42 -- # jq -r .
00:08:43.053  [2024-12-17 00:39:32.077643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:43.053  [2024-12-17 00:39:32.077698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954782 ]
00:08:43.053  EAL: No free 2048 kB hugepages reported on node 1
00:08:43.053  [2024-12-17 00:39:32.168701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:43.053  [2024-12-17 00:39:32.218916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.435   00:39:33	-- accel/accel.sh@18 -- # out='
00:08:44.436  SPDK Configuration:
00:08:44.436  Core mask:      0x1
00:08:44.436  
00:08:44.436  Accel Perf Configuration:
00:08:44.436  Workload Type:  xor
00:08:44.436  Source buffers: 3
00:08:44.436  Transfer size:  4096 bytes
00:08:44.436  Vector count    1
00:08:44.436  Module:         software
00:08:44.436  Queue depth:    32
00:08:44.436  Allocate depth: 32
00:08:44.436  # threads/core: 1
00:08:44.436  Run time:       1 seconds
00:08:44.436  Verify:         Yes
00:08:44.436  
00:08:44.436  Running for 1 seconds...
00:08:44.436  
00:08:44.436  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:44.436  ------------------------------------------------------------------------------------
00:08:44.436  0,0                      306560/s       1197 MiB/s                0                0
00:08:44.436  ====================================================================================
00:08:44.436  Total                    306560/s       1197 MiB/s                0                0'
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436    00:39:33	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:08:44.436    00:39:33	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:08:44.436     00:39:33	-- accel/accel.sh@12 -- # build_accel_config
00:08:44.436     00:39:33	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:44.436     00:39:33	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:44.436     00:39:33	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:44.436     00:39:33	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:44.436     00:39:33	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:44.436     00:39:33	-- accel/accel.sh@41 -- # local IFS=,
00:08:44.436     00:39:33	-- accel/accel.sh@42 -- # jq -r .
00:08:44.436  [2024-12-17 00:39:33.454918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:44.436  [2024-12-17 00:39:33.454997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954963 ]
00:08:44.436  EAL: No free 2048 kB hugepages reported on node 1
00:08:44.436  [2024-12-17 00:39:33.560811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.436  [2024-12-17 00:39:33.610304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=0x1
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=xor
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@24 -- # accel_opc=xor
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=3
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=software
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@23 -- # accel_module=software
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=32
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=32
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=1
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=Yes
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:44.436   00:39:33	-- accel/accel.sh@21 -- # val=
00:08:44.436   00:39:33	-- accel/accel.sh@22 -- # case "$var" in
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # IFS=:
00:08:44.436   00:39:33	-- accel/accel.sh@20 -- # read -r var val
00:08:45.814   00:39:34	-- accel/accel.sh@21 -- # val=
00:08:45.815   00:39:34	-- accel/accel.sh@22 -- # case "$var" in
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # IFS=:
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # read -r var val
00:08:45.815   00:39:34	-- accel/accel.sh@21 -- # val=
00:08:45.815   00:39:34	-- accel/accel.sh@22 -- # case "$var" in
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # IFS=:
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # read -r var val
00:08:45.815   00:39:34	-- accel/accel.sh@21 -- # val=
00:08:45.815   00:39:34	-- accel/accel.sh@22 -- # case "$var" in
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # IFS=:
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # read -r var val
00:08:45.815   00:39:34	-- accel/accel.sh@21 -- # val=
00:08:45.815   00:39:34	-- accel/accel.sh@22 -- # case "$var" in
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # IFS=:
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # read -r var val
00:08:45.815   00:39:34	-- accel/accel.sh@21 -- # val=
00:08:45.815   00:39:34	-- accel/accel.sh@22 -- # case "$var" in
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # IFS=:
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # read -r var val
00:08:45.815   00:39:34	-- accel/accel.sh@21 -- # val=
00:08:45.815   00:39:34	-- accel/accel.sh@22 -- # case "$var" in
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # IFS=:
00:08:45.815   00:39:34	-- accel/accel.sh@20 -- # read -r var val
00:08:45.815   00:39:34	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:45.815   00:39:34	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:08:45.815   00:39:34	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:45.815  
00:08:45.815  real	0m2.759s
00:08:45.815  user	0m2.442s
00:08:45.815  sys	0m0.321s
00:08:45.815   00:39:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:45.815   00:39:34	-- common/autotest_common.sh@10 -- # set +x
00:08:45.815  ************************************
00:08:45.815  END TEST accel_xor
00:08:45.815  ************************************
00:08:45.815   00:39:34	-- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:45.815   00:39:34	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:08:45.815   00:39:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:45.815   00:39:34	-- common/autotest_common.sh@10 -- # set +x
00:08:45.815  ************************************
00:08:45.815  START TEST accel_dif_verify
00:08:45.815  ************************************
00:08:45.815   00:39:34	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify
00:08:45.815   00:39:34	-- accel/accel.sh@16 -- # local accel_opc
00:08:45.815   00:39:34	-- accel/accel.sh@17 -- # local accel_module
00:08:45.815    00:39:34	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:08:45.815    00:39:34	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:45.815     00:39:34	-- accel/accel.sh@12 -- # build_accel_config
00:08:45.815     00:39:34	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:45.815     00:39:34	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:45.815     00:39:34	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:45.815     00:39:34	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:45.815     00:39:34	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:45.815     00:39:34	-- accel/accel.sh@41 -- # local IFS=,
00:08:45.815     00:39:34	-- accel/accel.sh@42 -- # jq -r .
00:08:45.815  [2024-12-17 00:39:34.895017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:45.815  [2024-12-17 00:39:34.895086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955205 ]
00:08:45.815  EAL: No free 2048 kB hugepages reported on node 1
00:08:45.815  [2024-12-17 00:39:35.001091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.815  [2024-12-17 00:39:35.047703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:47.193   00:39:36	-- accel/accel.sh@18 -- # out='
00:08:47.193  SPDK Configuration:
00:08:47.193  Core mask:      0x1
00:08:47.193  
00:08:47.193  Accel Perf Configuration:
00:08:47.193  Workload Type:  dif_verify
00:08:47.193  Vector size:    4096 bytes
00:08:47.193  Transfer size:  4096 bytes
00:08:47.193  Block size:     512 bytes
00:08:47.193  Metadata size:  8 bytes
00:08:47.193  Vector count    1
00:08:47.193  Module:         software
00:08:47.193  Queue depth:    32
00:08:47.193  Allocate depth: 32
00:08:47.193  # threads/core: 1
00:08:47.193  Run time:       1 seconds
00:08:47.193  Verify:         No
00:08:47.193  
00:08:47.194  Running for 1 seconds...
00:08:47.194  
00:08:47.194  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:47.194  ------------------------------------------------------------------------------------
00:08:47.194  0,0                       85184/s        332 MiB/s                0                0
00:08:47.194  ====================================================================================
00:08:47.194  Total                     85184/s        332 MiB/s                0                0'
00:08:47.194   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.194   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.194    00:39:36	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:47.194    00:39:36	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:47.194     00:39:36	-- accel/accel.sh@12 -- # build_accel_config
00:08:47.194     00:39:36	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:47.194     00:39:36	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:47.194     00:39:36	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:47.194     00:39:36	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:47.194     00:39:36	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:47.194     00:39:36	-- accel/accel.sh@41 -- # local IFS=,
00:08:47.194     00:39:36	-- accel/accel.sh@42 -- # jq -r .
00:08:47.194  [2024-12-17 00:39:36.275046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:47.194  [2024-12-17 00:39:36.275114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955391 ]
00:08:47.194  EAL: No free 2048 kB hugepages reported on node 1
00:08:47.194  [2024-12-17 00:39:36.376826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:47.194  [2024-12-17 00:39:36.423105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=0x1
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=dif_verify
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@24 -- # accel_opc=dif_verify
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val='512 bytes'
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val='8 bytes'
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=software
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@23 -- # accel_module=software
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=32
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=32
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=1
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=No
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:47.453   00:39:36	-- accel/accel.sh@21 -- # val=
00:08:47.453   00:39:36	-- accel/accel.sh@22 -- # case "$var" in
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # IFS=:
00:08:47.453   00:39:36	-- accel/accel.sh@20 -- # read -r var val
00:08:48.391   00:39:37	-- accel/accel.sh@21 -- # val=
00:08:48.391   00:39:37	-- accel/accel.sh@22 -- # case "$var" in
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # IFS=:
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # read -r var val
00:08:48.391   00:39:37	-- accel/accel.sh@21 -- # val=
00:08:48.391   00:39:37	-- accel/accel.sh@22 -- # case "$var" in
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # IFS=:
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # read -r var val
00:08:48.391   00:39:37	-- accel/accel.sh@21 -- # val=
00:08:48.391   00:39:37	-- accel/accel.sh@22 -- # case "$var" in
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # IFS=:
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # read -r var val
00:08:48.391   00:39:37	-- accel/accel.sh@21 -- # val=
00:08:48.391   00:39:37	-- accel/accel.sh@22 -- # case "$var" in
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # IFS=:
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # read -r var val
00:08:48.391   00:39:37	-- accel/accel.sh@21 -- # val=
00:08:48.391   00:39:37	-- accel/accel.sh@22 -- # case "$var" in
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # IFS=:
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # read -r var val
00:08:48.391   00:39:37	-- accel/accel.sh@21 -- # val=
00:08:48.391   00:39:37	-- accel/accel.sh@22 -- # case "$var" in
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # IFS=:
00:08:48.391   00:39:37	-- accel/accel.sh@20 -- # read -r var val
00:08:48.391   00:39:37	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:48.391   00:39:37	-- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:08:48.391   00:39:37	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:48.391  
00:08:48.391  real	0m2.755s
00:08:48.391  user	0m2.446s
00:08:48.391  sys	0m0.318s
00:08:48.391   00:39:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:48.391   00:39:37	-- common/autotest_common.sh@10 -- # set +x
00:08:48.391  ************************************
00:08:48.391  END TEST accel_dif_verify
00:08:48.391  ************************************
00:08:48.650   00:39:37	-- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:48.650   00:39:37	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:08:48.650   00:39:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:48.650   00:39:37	-- common/autotest_common.sh@10 -- # set +x
00:08:48.650  ************************************
00:08:48.650  START TEST accel_dif_generate
00:08:48.650  ************************************
00:08:48.650   00:39:37	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate
00:08:48.650   00:39:37	-- accel/accel.sh@16 -- # local accel_opc
00:08:48.650   00:39:37	-- accel/accel.sh@17 -- # local accel_module
00:08:48.650    00:39:37	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
00:08:48.651    00:39:37	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:48.651     00:39:37	-- accel/accel.sh@12 -- # build_accel_config
00:08:48.651     00:39:37	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:48.651     00:39:37	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:48.651     00:39:37	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:48.651     00:39:37	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:48.651     00:39:37	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:48.651     00:39:37	-- accel/accel.sh@41 -- # local IFS=,
00:08:48.651     00:39:37	-- accel/accel.sh@42 -- # jq -r .
00:08:48.651  [2024-12-17 00:39:37.693105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:48.651  [2024-12-17 00:39:37.693179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955590 ]
00:08:48.651  EAL: No free 2048 kB hugepages reported on node 1
00:08:48.651  [2024-12-17 00:39:37.798161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:48.651  [2024-12-17 00:39:37.847108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.029   00:39:39	-- accel/accel.sh@18 -- # out='
00:08:50.029  SPDK Configuration:
00:08:50.029  Core mask:      0x1
00:08:50.029  
00:08:50.029  Accel Perf Configuration:
00:08:50.029  Workload Type:  dif_generate
00:08:50.029  Vector size:    4096 bytes
00:08:50.029  Transfer size:  4096 bytes
00:08:50.029  Block size:     512 bytes
00:08:50.029  Metadata size:  8 bytes
00:08:50.029  Vector count    1
00:08:50.029  Module:         software
00:08:50.029  Queue depth:    32
00:08:50.029  Allocate depth: 32
00:08:50.029  # threads/core: 1
00:08:50.029  Run time:       1 seconds
00:08:50.029  Verify:         No
00:08:50.029  
00:08:50.029  Running for 1 seconds...
00:08:50.029  
00:08:50.029  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:50.029  ------------------------------------------------------------------------------------
00:08:50.029  0,0                      102432/s        400 MiB/s                0                0
00:08:50.029  ====================================================================================
00:08:50.029  Total                    102432/s        400 MiB/s                0                0'
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029    00:39:39	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:50.029    00:39:39	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:50.029     00:39:39	-- accel/accel.sh@12 -- # build_accel_config
00:08:50.029     00:39:39	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:50.029     00:39:39	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:50.029     00:39:39	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:50.029     00:39:39	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:50.029     00:39:39	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:50.029     00:39:39	-- accel/accel.sh@41 -- # local IFS=,
00:08:50.029     00:39:39	-- accel/accel.sh@42 -- # jq -r .
00:08:50.029  [2024-12-17 00:39:39.069701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:50.029  [2024-12-17 00:39:39.069770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955771 ]
00:08:50.029  EAL: No free 2048 kB hugepages reported on node 1
00:08:50.029  [2024-12-17 00:39:39.172185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.029  [2024-12-17 00:39:39.221410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=0x1
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=dif_generate
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@24 -- # accel_opc=dif_generate
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val='512 bytes'
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val='8 bytes'
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=software
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@23 -- # accel_module=software
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=32
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=32
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=1
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:50.029   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.029   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.029   00:39:39	-- accel/accel.sh@21 -- # val=No
00:08:50.289   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.289   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.289   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.289   00:39:39	-- accel/accel.sh@21 -- # val=
00:08:50.289   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.289   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.289   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:50.289   00:39:39	-- accel/accel.sh@21 -- # val=
00:08:50.289   00:39:39	-- accel/accel.sh@22 -- # case "$var" in
00:08:50.289   00:39:39	-- accel/accel.sh@20 -- # IFS=:
00:08:50.289   00:39:39	-- accel/accel.sh@20 -- # read -r var val
00:08:51.226   00:39:40	-- accel/accel.sh@21 -- # val=
00:08:51.226   00:39:40	-- accel/accel.sh@22 -- # case "$var" in
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # IFS=:
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # read -r var val
00:08:51.226   00:39:40	-- accel/accel.sh@21 -- # val=
00:08:51.226   00:39:40	-- accel/accel.sh@22 -- # case "$var" in
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # IFS=:
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # read -r var val
00:08:51.226   00:39:40	-- accel/accel.sh@21 -- # val=
00:08:51.226   00:39:40	-- accel/accel.sh@22 -- # case "$var" in
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # IFS=:
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # read -r var val
00:08:51.226   00:39:40	-- accel/accel.sh@21 -- # val=
00:08:51.226   00:39:40	-- accel/accel.sh@22 -- # case "$var" in
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # IFS=:
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # read -r var val
00:08:51.226   00:39:40	-- accel/accel.sh@21 -- # val=
00:08:51.226   00:39:40	-- accel/accel.sh@22 -- # case "$var" in
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # IFS=:
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # read -r var val
00:08:51.226   00:39:40	-- accel/accel.sh@21 -- # val=
00:08:51.226   00:39:40	-- accel/accel.sh@22 -- # case "$var" in
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # IFS=:
00:08:51.226   00:39:40	-- accel/accel.sh@20 -- # read -r var val
00:08:51.226   00:39:40	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:51.226   00:39:40	-- accel/accel.sh@28 -- # [[ -n dif_generate ]]
00:08:51.226   00:39:40	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:51.226  
00:08:51.226  real	0m2.770s
00:08:51.226  user	0m2.437s
00:08:51.226  sys	0m0.341s
00:08:51.226   00:39:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:51.226   00:39:40	-- common/autotest_common.sh@10 -- # set +x
00:08:51.226  ************************************
00:08:51.226  END TEST accel_dif_generate
00:08:51.226  ************************************
00:08:51.226   00:39:40	-- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:51.226   00:39:40	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:08:51.226   00:39:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:51.226   00:39:40	-- common/autotest_common.sh@10 -- # set +x
00:08:51.226  ************************************
00:08:51.226  START TEST accel_dif_generate_copy
00:08:51.226  ************************************
00:08:51.226   00:39:40	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy
00:08:51.226   00:39:40	-- accel/accel.sh@16 -- # local accel_opc
00:08:51.226   00:39:40	-- accel/accel.sh@17 -- # local accel_module
00:08:51.226    00:39:40	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy
00:08:51.227    00:39:40	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:51.227     00:39:40	-- accel/accel.sh@12 -- # build_accel_config
00:08:51.227     00:39:40	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:51.227     00:39:40	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:51.227     00:39:40	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:51.227     00:39:40	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:51.227     00:39:40	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:51.227     00:39:40	-- accel/accel.sh@41 -- # local IFS=,
00:08:51.227     00:39:40	-- accel/accel.sh@42 -- # jq -r .
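In the build_accel_config trace above, accel_json_cfg stays empty (none of the -gt 0 guards fire), so accel_perf receives an effectively empty JSON config; the -c /dev/fd/62 argument is the file descriptor that process substitution hands the config over on. A minimal sketch of that pattern, with an assumed helper name (the harness's exact function body is only visible here as xtrace):

    # Sketch: join JSON fragments with IFS=, and pass them to accel_perf
    # via process substitution, which appears to the child as /dev/fd/NN.
    emit_accel_cfg() {
        local IFS=,                       # join array elements with commas
        local frags=("$@")                # empty in the runs traced above
        echo "{ ${frags[*]} }" | jq -r .  # jq -r . validates the JSON
    }
    # ./build/examples/accel_perf -c <(emit_accel_cfg) -t 1 -w dif_generate_copy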
00:08:51.486  [2024-12-17 00:39:40.505774] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:51.486  [2024-12-17 00:39:40.505840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955966 ]
00:08:51.486  EAL: No free 2048 kB hugepages reported on node 1
00:08:51.486  [2024-12-17 00:39:40.612328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:51.486  [2024-12-17 00:39:40.662299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
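For readers unfamiliar with the DPDK EAL parameter line above, the flags are standard EAL options, annotated here for reference:

    # -c 0x1                   core mask: run on core 0 only
    # --no-shconf              do not create a shared EAL config file
    # --huge-unlink            unlink hugepage files right after mapping them
    # --no-telemetry           disable the DPDK telemetry socket
    # --log-level=lib.eal:6    per-component log verbosity
    # --base-virtaddr=0x2...   fixed base address for memory mappings
    # --match-allocations      free hugepages back exactly as allocated
    # --file-prefix=spdk_pid*  per-process hugepage namespace, so concurrent
    #                          SPDK processes do not collide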
00:08:53.004   00:39:41	-- accel/accel.sh@18 -- # out='
00:08:53.004  SPDK Configuration:
00:08:53.004  Core mask:      0x1
00:08:53.004  
00:08:53.004  Accel Perf Configuration:
00:08:53.004  Workload Type:  dif_generate_copy
00:08:53.004  Vector size:    4096 bytes
00:08:53.004  Transfer size:  4096 bytes
00:08:53.004  Vector count:   1
00:08:53.004  Module:         software
00:08:53.004  Queue depth:    32
00:08:53.004  Allocate depth: 32
00:08:53.004  # threads/core: 1
00:08:53.004  Run time:       1 seconds
00:08:53.004  Verify:         No
00:08:53.004  
00:08:53.004  Running for 1 seconds...
00:08:53.004  
00:08:53.004  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:53.004  ------------------------------------------------------------------------------------
00:08:53.004  0,0                       79168/s        314 MiB/s                0                0
00:08:53.004  ====================================================================================
00:08:53.004  Total                     79168/s        309 MiB/s                0                0'
00:08:53.004   00:39:41	-- accel/accel.sh@20 -- # IFS=:
00:08:53.004   00:39:41	-- accel/accel.sh@20 -- # read -r var val
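As a quick consistency check on the table above: bandwidth is transfers/s multiplied by the 4096-byte transfer size, and the Total row matches that arithmetic (awk assumed available):

    # 79168 transfers/s x 4096 B per transfer, converted to MiB/s:
    awk 'BEGIN { printf "%.0f MiB/s\n", 79168 * 4096 / (1024 * 1024) }'
    # prints: 309 MiB/s, matching the Total row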
00:08:53.004    00:39:41	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:08:53.004    00:39:41	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:53.004     00:39:41	-- accel/accel.sh@12 -- # build_accel_config
00:08:53.004     00:39:41	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:53.004     00:39:41	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:53.004     00:39:41	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:53.004     00:39:41	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:53.004     00:39:41	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:53.004     00:39:41	-- accel/accel.sh@41 -- # local IFS=,
00:08:53.004     00:39:41	-- accel/accel.sh@42 -- # jq -r .
00:08:53.004  [2024-12-17 00:39:41.897875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:53.004  [2024-12-17 00:39:41.897952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956157 ]
00:08:53.004  EAL: No free 2048 kB hugepages reported on node 1
00:08:53.004  [2024-12-17 00:39:42.003458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.004  [2024-12-17 00:39:42.052721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.004   00:39:42	-- accel/accel.sh@21 -- # val=
00:08:53.004   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.004   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.004   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.004   00:39:42	-- accel/accel.sh@21 -- # val=
00:08:53.004   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.004   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.004   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.004   00:39:42	-- accel/accel.sh@21 -- # val=0x1
00:08:53.004   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.004   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.004   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.004   00:39:42	-- accel/accel.sh@21 -- # val=
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=dif_generate_copy
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@24 -- # accel_opc=dif_generate_copy
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=software
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@23 -- # accel_module=software
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=32
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=32
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=1
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=No
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:53.005   00:39:42	-- accel/accel.sh@21 -- # val=
00:08:53.005   00:39:42	-- accel/accel.sh@22 -- # case "$var" in
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # IFS=:
00:08:53.005   00:39:42	-- accel/accel.sh@20 -- # read -r var val
00:08:54.384   00:39:43	-- accel/accel.sh@21 -- # val=
00:08:54.384   00:39:43	-- accel/accel.sh@22 -- # case "$var" in
00:08:54.384   00:39:43	-- accel/accel.sh@20 -- # IFS=:
00:08:54.384   00:39:43	-- accel/accel.sh@20 -- # read -r var val
00:08:54.384   00:39:43	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:54.384   00:39:43	-- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
00:08:54.384   00:39:43	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:54.384  
00:08:54.384  real	0m2.787s
00:08:54.384  user	0m2.453s
00:08:54.384  sys	0m0.339s
00:08:54.384   00:39:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:54.384   00:39:43	-- common/autotest_common.sh@10 -- # set +x
00:08:54.384  ************************************
00:08:54.384  END TEST accel_dif_generate_copy
00:08:54.384  ************************************
00:08:54.384   00:39:43	-- accel/accel.sh@107 -- # [[ y == y ]]
00:08:54.384   00:39:43	-- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:54.384   00:39:43	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:08:54.384   00:39:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:54.384   00:39:43	-- common/autotest_common.sh@10 -- # set +x
00:08:54.384  ************************************
00:08:54.384  START TEST accel_comp
00:08:54.384  ************************************
00:08:54.384   00:39:43	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:54.384   00:39:43	-- accel/accel.sh@16 -- # local accel_opc
00:08:54.384   00:39:43	-- accel/accel.sh@17 -- # local accel_module
00:08:54.384    00:39:43	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:54.384    00:39:43	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:54.384     00:39:43	-- accel/accel.sh@12 -- # build_accel_config
00:08:54.384     00:39:43	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:54.384     00:39:43	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:54.384     00:39:43	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:54.384     00:39:43	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:54.384     00:39:43	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:54.384     00:39:43	-- accel/accel.sh@41 -- # local IFS=,
00:08:54.384     00:39:43	-- accel/accel.sh@42 -- # jq -r .
00:08:54.384  [2024-12-17 00:39:43.339538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:54.385  [2024-12-17 00:39:43.339605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956350 ]
00:08:54.385  EAL: No free 2048 kB hugepages reported on node 1
00:08:54.385  [2024-12-17 00:39:43.441780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.385  [2024-12-17 00:39:43.491585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.764   00:39:44	-- accel/accel.sh@18 -- # out='Preparing input file...
00:08:55.764  
00:08:55.764  SPDK Configuration:
00:08:55.764  Core mask:      0x1
00:08:55.764  
00:08:55.764  Accel Perf Configuration:
00:08:55.764  Workload Type:  compress
00:08:55.764  Transfer size:  4096 bytes
00:08:55.764  Vector count:   1
00:08:55.764  Module:         software
00:08:55.764  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:55.764  Queue depth:    32
00:08:55.764  Allocate depth: 32
00:08:55.764  # threads/core: 1
00:08:55.764  Run time:       1 seconds
00:08:55.764  Verify:         No
00:08:55.764  
00:08:55.764  Running for 1 seconds...
00:08:55.764  
00:08:55.764  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:55.764  ------------------------------------------------------------------------------------
00:08:55.764  0,0                       42464/s        177 MiB/s                0                0
00:08:55.764  ====================================================================================
00:08:55.764  Total                     42464/s        165 MiB/s                0                0'
00:08:55.764   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.764   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.764    00:39:44	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:55.764    00:39:44	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:55.764     00:39:44	-- accel/accel.sh@12 -- # build_accel_config
00:08:55.764     00:39:44	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:55.764     00:39:44	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:55.764     00:39:44	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:55.764     00:39:44	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:55.764     00:39:44	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:55.764     00:39:44	-- accel/accel.sh@41 -- # local IFS=,
00:08:55.764     00:39:44	-- accel/accel.sh@42 -- # jq -r .
00:08:55.764  [2024-12-17 00:39:44.731194] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:55.764  [2024-12-17 00:39:44.731262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956530 ]
00:08:55.764  EAL: No free 2048 kB hugepages reported on node 1
00:08:55.764  [2024-12-17 00:39:44.837398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.764  [2024-12-17 00:39:44.906506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.764   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.764   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.764   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.764   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.764   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.764   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=0x1
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=compress
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@24 -- # accel_opc=compress
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=software
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@23 -- # accel_module=software
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=32
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=32
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=1
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=No
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:55.765   00:39:44	-- accel/accel.sh@21 -- # val=
00:08:55.765   00:39:44	-- accel/accel.sh@22 -- # case "$var" in
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # IFS=:
00:08:55.765   00:39:44	-- accel/accel.sh@20 -- # read -r var val
00:08:57.151   00:39:46	-- accel/accel.sh@21 -- # val=
00:08:57.151   00:39:46	-- accel/accel.sh@22 -- # case "$var" in
00:08:57.151   00:39:46	-- accel/accel.sh@20 -- # IFS=:
00:08:57.151   00:39:46	-- accel/accel.sh@20 -- # read -r var val
00:08:57.151   00:39:46	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:57.151   00:39:46	-- accel/accel.sh@28 -- # [[ -n compress ]]
00:08:57.151   00:39:46	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:57.151  
00:08:57.151  real	0m2.812s
00:08:57.151  user	0m2.461s
00:08:57.151  sys	0m0.348s
00:08:57.151   00:39:46	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:57.151   00:39:46	-- common/autotest_common.sh@10 -- # set +x
00:08:57.151  ************************************
00:08:57.151  END TEST accel_comp
00:08:57.151  ************************************
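Unlike the dif tests, the compress run feeds accel_perf a real input file via -l, hence the "Preparing input file..." line and the File Name row in its configuration block. A sketch of an equivalent manual invocation; SPDK_DIR is an assumed variable standing in for the checkout path used in this log:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress \
        -l "$SPDK_DIR/test/accel/bib"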
00:08:57.151   00:39:46	-- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:57.151   00:39:46	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:08:57.151   00:39:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:57.151   00:39:46	-- common/autotest_common.sh@10 -- # set +x
00:08:57.151  ************************************
00:08:57.151  START TEST accel_decomp
00:08:57.151  ************************************
00:08:57.151   00:39:46	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:57.151   00:39:46	-- accel/accel.sh@16 -- # local accel_opc
00:08:57.151   00:39:46	-- accel/accel.sh@17 -- # local accel_module
00:08:57.151    00:39:46	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:57.151    00:39:46	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:57.151     00:39:46	-- accel/accel.sh@12 -- # build_accel_config
00:08:57.151     00:39:46	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:57.151     00:39:46	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:57.151     00:39:46	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:57.151     00:39:46	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:57.151     00:39:46	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:57.151     00:39:46	-- accel/accel.sh@41 -- # local IFS=,
00:08:57.151     00:39:46	-- accel/accel.sh@42 -- # jq -r .
00:08:57.151  [2024-12-17 00:39:46.195223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:57.151  [2024-12-17 00:39:46.195294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956732 ]
00:08:57.151  EAL: No free 2048 kB hugepages reported on node 1
00:08:57.151  [2024-12-17 00:39:46.302747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.151  [2024-12-17 00:39:46.352606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:58.530   00:39:47	-- accel/accel.sh@18 -- # out='Preparing input file...
00:08:58.530  
00:08:58.530  SPDK Configuration:
00:08:58.530  Core mask:      0x1
00:08:58.530  
00:08:58.530  Accel Perf Configuration:
00:08:58.530  Workload Type:  decompress
00:08:58.530  Transfer size:  4096 bytes
00:08:58.530  Vector count:   1
00:08:58.530  Module:         software
00:08:58.530  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:58.530  Queue depth:    32
00:08:58.530  Allocate depth: 32
00:08:58.530  # threads/core: 1
00:08:58.530  Run time:       1 seconds
00:08:58.530  Verify:         Yes
00:08:58.530  
00:08:58.530  Running for 1 seconds...
00:08:58.530  
00:08:58.530  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:58.530  ------------------------------------------------------------------------------------
00:08:58.530  0,0                       57024/s        105 MiB/s                0                0
00:08:58.530  ====================================================================================
00:08:58.530  Total                     57024/s        222 MiB/s                0                0'
00:08:58.530   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.530   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.530    00:39:47	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:58.530    00:39:47	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:08:58.530     00:39:47	-- accel/accel.sh@12 -- # build_accel_config
00:08:58.530     00:39:47	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:58.530     00:39:47	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:58.530     00:39:47	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:58.530     00:39:47	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:58.530     00:39:47	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:58.530     00:39:47	-- accel/accel.sh@41 -- # local IFS=,
00:08:58.530     00:39:47	-- accel/accel.sh@42 -- # jq -r .
00:08:58.530  [2024-12-17 00:39:47.590902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:58.530  [2024-12-17 00:39:47.590981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956912 ]
00:08:58.530  EAL: No free 2048 kB hugepages reported on node 1
00:08:58.530  [2024-12-17 00:39:47.695989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:58.530  [2024-12-17 00:39:47.745241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=0x1
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=decompress
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@24 -- # accel_opc=decompress
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=software
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@23 -- # accel_module=software
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=32
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=32
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=1
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=Yes
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:58.789   00:39:47	-- accel/accel.sh@21 -- # val=
00:08:58.789   00:39:47	-- accel/accel.sh@22 -- # case "$var" in
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # IFS=:
00:08:58.789   00:39:47	-- accel/accel.sh@20 -- # read -r var val
00:08:59.725   00:39:48	-- accel/accel.sh@21 -- # val=
00:08:59.725   00:39:48	-- accel/accel.sh@22 -- # case "$var" in
00:08:59.725   00:39:48	-- accel/accel.sh@20 -- # IFS=:
00:08:59.725   00:39:48	-- accel/accel.sh@20 -- # read -r var val
00:08:59.725   00:39:48	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:59.725   00:39:48	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:08:59.725   00:39:48	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:59.725  
00:08:59.725  real	0m2.794s
00:08:59.725  user	0m2.465s
00:08:59.725  sys	0m0.336s
00:08:59.725   00:39:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:59.725   00:39:48	-- common/autotest_common.sh@10 -- # set +x
00:08:59.725  ************************************
00:08:59.725  END TEST accel_decomp
00:08:59.725  ************************************
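The decompress run adds -y, which enables verification (Verify: Yes in its configuration block), so every output buffer is checked and any mismatch would show up in the Miscompares column. Equivalent manual run, under the same SPDK_DIR assumption as above:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y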
00:08:59.984   00:39:48	-- accel/accel.sh@110 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:59.984   00:39:48	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:08:59.984   00:39:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:59.984   00:39:48	-- common/autotest_common.sh@10 -- # set +x
00:08:59.984  ************************************
00:08:59.984  START TEST accel_decomp_full
00:08:59.984  ************************************
00:08:59.984   00:39:49	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:59.984   00:39:49	-- accel/accel.sh@16 -- # local accel_opc
00:08:59.984   00:39:49	-- accel/accel.sh@17 -- # local accel_module
00:08:59.984    00:39:49	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:59.984    00:39:49	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:59.984     00:39:49	-- accel/accel.sh@12 -- # build_accel_config
00:08:59.984     00:39:49	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:59.984     00:39:49	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:59.984     00:39:49	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:59.984     00:39:49	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:59.984     00:39:49	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:59.984     00:39:49	-- accel/accel.sh@41 -- # local IFS=,
00:08:59.984     00:39:49	-- accel/accel.sh@42 -- # jq -r .
00:08:59.984  [2024-12-17 00:39:49.023056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:59.984  [2024-12-17 00:39:49.023123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957111 ]
00:08:59.984  EAL: No free 2048 kB hugepages reported on node 1
00:08:59.984  [2024-12-17 00:39:49.127068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.984  [2024-12-17 00:39:49.176380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:01.362   00:39:50	-- accel/accel.sh@18 -- # out='Preparing input file...
00:09:01.362  
00:09:01.362  SPDK Configuration:
00:09:01.362  Core mask:      0x1
00:09:01.362  
00:09:01.362  Accel Perf Configuration:
00:09:01.362  Workload Type:  decompress
00:09:01.362  Transfer size:  111250 bytes
00:09:01.362  Vector count:   1
00:09:01.362  Module:         software
00:09:01.362  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:01.362  Queue depth:    32
00:09:01.362  Allocate depth: 32
00:09:01.362  # threads/core: 1
00:09:01.362  Run time:       1 seconds
00:09:01.362  Verify:         Yes
00:09:01.362  
00:09:01.362  Running for 1 seconds...
00:09:01.362  
00:09:01.362  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:01.362  ------------------------------------------------------------------------------------
00:09:01.362  0,0                        3808/s        157 MiB/s                0                0
00:09:01.362  ====================================================================================
00:09:01.362  Total                      3808/s        404 MiB/s                0                0'
00:09:01.362   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.362   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.362    00:39:50	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:09:01.362    00:39:50	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:09:01.362     00:39:50	-- accel/accel.sh@12 -- # build_accel_config
00:09:01.362     00:39:50	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:01.362     00:39:50	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:01.362     00:39:50	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:01.362     00:39:50	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:01.362     00:39:50	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:01.362     00:39:50	-- accel/accel.sh@41 -- # local IFS=,
00:09:01.362     00:39:50	-- accel/accel.sh@42 -- # jq -r .
00:09:01.363  [2024-12-17 00:39:50.429050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:01.363  [2024-12-17 00:39:50.429117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957294 ]
00:09:01.363  EAL: No free 2048 kB hugepages reported on node 1
00:09:01.363  [2024-12-17 00:39:50.535091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:01.363  [2024-12-17 00:39:50.582083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=0x1
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=decompress
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@24 -- # accel_opc=decompress
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val='111250 bytes'
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=software
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@23 -- # accel_module=software
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=32
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=32
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=1
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=Yes
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:01.622   00:39:50	-- accel/accel.sh@21 -- # val=
00:09:01.622   00:39:50	-- accel/accel.sh@22 -- # case "$var" in
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # IFS=:
00:09:01.622   00:39:50	-- accel/accel.sh@20 -- # read -r var val
00:09:02.559   00:39:51	-- accel/accel.sh@21 -- # val=
00:09:02.559   00:39:51	-- accel/accel.sh@22 -- # case "$var" in
00:09:02.559   00:39:51	-- accel/accel.sh@20 -- # IFS=:
00:09:02.559   00:39:51	-- accel/accel.sh@20 -- # read -r var val
00:09:02.559   00:39:51	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:02.559   00:39:51	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:09:02.559   00:39:51	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:02.559  
00:09:02.559  real	0m2.802s
00:09:02.559  user	0m2.482s
00:09:02.559  sys	0m0.326s
00:09:02.559   00:39:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:02.559   00:39:51	-- common/autotest_common.sh@10 -- # set +x
00:09:02.559  ************************************
00:09:02.559  END TEST accel_decomp_full
00:09:02.559  ************************************
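The "full" variant appends -o 0 to the same decompress command; judging by its configuration block, the transfer size then tracks the whole decompressed buffer (111250 bytes rather than 4096), which is why transfers/s drops sharply while bandwidth stays high. That reading of -o is inferred from the output above, not from the tool's documentation:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    # -o 0 is copied verbatim from the harness command line above
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0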
00:09:02.819   00:39:51	-- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:02.819   00:39:51	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:09:02.819   00:39:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:02.819   00:39:51	-- common/autotest_common.sh@10 -- # set +x
00:09:02.819  ************************************
00:09:02.819  START TEST accel_decomp_mcore
00:09:02.819  ************************************
00:09:02.819   00:39:51	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:02.819   00:39:51	-- accel/accel.sh@16 -- # local accel_opc
00:09:02.819   00:39:51	-- accel/accel.sh@17 -- # local accel_module
00:09:02.819    00:39:51	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:02.819    00:39:51	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:02.819     00:39:51	-- accel/accel.sh@12 -- # build_accel_config
00:09:02.819     00:39:51	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:02.819     00:39:51	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:02.819     00:39:51	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:02.819     00:39:51	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:02.819     00:39:51	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:02.819     00:39:51	-- accel/accel.sh@41 -- # local IFS=,
00:09:02.819     00:39:51	-- accel/accel.sh@42 -- # jq -r .
00:09:02.819  [2024-12-17 00:39:51.880133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:02.819  [2024-12-17 00:39:51.880199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957491 ]
00:09:02.819  EAL: No free 2048 kB hugepages reported on node 1
00:09:02.819  [2024-12-17 00:39:51.983257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:02.819  [2024-12-17 00:39:52.032926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:02.819  [2024-12-17 00:39:52.032971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:02.819  [2024-12-17 00:39:52.033057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:02.819  [2024-12-17 00:39:52.033058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.198   00:39:53	-- accel/accel.sh@18 -- # out='Preparing input file...
00:09:04.198  
00:09:04.198  SPDK Configuration:
00:09:04.198  Core mask:      0xf
00:09:04.198  
00:09:04.198  Accel Perf Configuration:
00:09:04.198  Workload Type:  decompress
00:09:04.198  Transfer size:  4096 bytes
00:09:04.198  Vector count:   1
00:09:04.198  Module:         software
00:09:04.198  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:04.198  Queue depth:    32
00:09:04.198  Allocate depth: 32
00:09:04.198  # threads/core: 1
00:09:04.198  Run time:       1 seconds
00:09:04.198  Verify:         Yes
00:09:04.198  
00:09:04.198  Running for 1 seconds...
00:09:04.198  
00:09:04.198  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:04.198  ------------------------------------------------------------------------------------
00:09:04.198  0,0                       50368/s         92 MiB/s                0                0
00:09:04.198  3,0                       50528/s         93 MiB/s                0                0
00:09:04.198  2,0                       71040/s        130 MiB/s                0                0
00:09:04.198  1,0                       50688/s         93 MiB/s                0                0
00:09:04.198  ====================================================================================
00:09:04.198  Total                    222624/s        869 MiB/s                0                0'
00:09:04.198   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.198   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.198    00:39:53	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:04.198    00:39:53	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:04.198     00:39:53	-- accel/accel.sh@12 -- # build_accel_config
00:09:04.198     00:39:53	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:04.198     00:39:53	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:04.198     00:39:53	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:04.198     00:39:53	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:04.198     00:39:53	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:04.198     00:39:53	-- accel/accel.sh@41 -- # local IFS=,
00:09:04.198     00:39:53	-- accel/accel.sh@42 -- # jq -r .
00:09:04.198  [2024-12-17 00:39:53.264045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:04.198  [2024-12-17 00:39:53.264112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957680 ]
00:09:04.198  EAL: No free 2048 kB hugepages reported on node 1
00:09:04.198  [2024-12-17 00:39:53.370310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:04.198  [2024-12-17 00:39:53.421076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:04.199  [2024-12-17 00:39:53.421162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:04.199  [2024-12-17 00:39:53.421277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:04.199  [2024-12-17 00:39:53.421277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=0xf
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=decompress
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@24 -- # accel_opc=decompress
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=software
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@23 -- # accel_module=software
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=32
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=32
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=1
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=Yes
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:04.458   00:39:53	-- accel/accel.sh@21 -- # val=
00:09:04.458   00:39:53	-- accel/accel.sh@22 -- # case "$var" in
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # IFS=:
00:09:04.458   00:39:53	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@21 -- # val=
00:09:05.396   00:39:54	-- accel/accel.sh@22 -- # case "$var" in
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # IFS=:
00:09:05.396   00:39:54	-- accel/accel.sh@20 -- # read -r var val
00:09:05.396   00:39:54	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:05.396   00:39:54	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:09:05.396   00:39:54	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:05.396  
00:09:05.396  real	0m2.779s
00:09:05.396  user	0m9.186s
00:09:05.396  sys	0m0.343s
00:09:05.396   00:39:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:05.396   00:39:54	-- common/autotest_common.sh@10 -- # set +x
00:09:05.396  ************************************
00:09:05.396  END TEST accel_decomp_mcore
00:09:05.396  ************************************
00:09:05.655   00:39:54	-- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:09:05.655   00:39:54	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:05.655   00:39:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:05.655   00:39:54	-- common/autotest_common.sh@10 -- # set +x
00:09:05.655  ************************************
00:09:05.655  START TEST accel_decomp_full_mcore
00:09:05.655  ************************************
00:09:05.655   00:39:54	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:09:05.655   00:39:54	-- accel/accel.sh@16 -- # local accel_opc
00:09:05.655   00:39:54	-- accel/accel.sh@17 -- # local accel_module
00:09:05.655    00:39:54	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:09:05.655    00:39:54	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:09:05.655     00:39:54	-- accel/accel.sh@12 -- # build_accel_config
00:09:05.655     00:39:54	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:05.655     00:39:54	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:05.655     00:39:54	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:05.655     00:39:54	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:05.655     00:39:54	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:05.655     00:39:54	-- accel/accel.sh@41 -- # local IFS=,
00:09:05.655     00:39:54	-- accel/accel.sh@42 -- # jq -r .
00:09:05.655  [2024-12-17 00:39:54.703028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:05.656  [2024-12-17 00:39:54.703124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957880 ]
00:09:05.656  EAL: No free 2048 kB hugepages reported on node 1
00:09:05.656  [2024-12-17 00:39:54.807894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:05.656  [2024-12-17 00:39:54.862318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:05.656  [2024-12-17 00:39:54.862404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:05.656  [2024-12-17 00:39:54.862509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:05.656  [2024-12-17 00:39:54.862510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:07.034   00:39:56	-- accel/accel.sh@18 -- # out='Preparing input file...
00:09:07.034  
00:09:07.034  SPDK Configuration:
00:09:07.034  Core mask:      0xf
00:09:07.034  
00:09:07.034  Accel Perf Configuration:
00:09:07.034  Workload Type:  decompress
00:09:07.034  Transfer size:  111250 bytes
00:09:07.034  Vector count    1
00:09:07.034  Module:         software
00:09:07.034  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:07.034  Queue depth:    32
00:09:07.034  Allocate depth: 32
00:09:07.034  # threads/core: 1
00:09:07.034  Run time:       1 seconds
00:09:07.034  Verify:         Yes
00:09:07.034  
00:09:07.034  Running for 1 seconds...
00:09:07.034  
00:09:07.034  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:07.034  ------------------------------------------------------------------------------------
00:09:07.034  0,0                        3776/s        400 MiB/s                0                0
00:09:07.034  3,0                        3776/s        400 MiB/s                0                0
00:09:07.034  2,0                        5504/s        583 MiB/s                0                0
00:09:07.034  1,0                        3776/s        400 MiB/s                0                0
00:09:07.034  ====================================================================================
00:09:07.034  Total                     16832/s       1785 MiB/s                0                0'
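[Editor's note] Each row's MiB/s figure above is just the transfer rate times the 111250-byte transfer size, truncated to whole MiB the way accel_perf prints it. A one-line bash check, with values taken from the 0,0 row:

    xfers_per_sec=3776; xfer_size=111250
    echo "$(( xfers_per_sec * xfer_size / 1048576 )) MiB/s"   # -> 400 MiB/s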
00:09:07.034   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.034   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.034    00:39:56	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:09:07.034    00:39:56	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:09:07.034     00:39:56	-- accel/accel.sh@12 -- # build_accel_config
00:09:07.034     00:39:56	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:07.034     00:39:56	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:07.034     00:39:56	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:07.034     00:39:56	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:07.034     00:39:56	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:07.034     00:39:56	-- accel/accel.sh@41 -- # local IFS=,
00:09:07.034     00:39:56	-- accel/accel.sh@42 -- # jq -r .
00:09:07.034  [2024-12-17 00:39:56.114955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:07.034  [2024-12-17 00:39:56.115042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958077 ]
00:09:07.034  EAL: No free 2048 kB hugepages reported on node 1
00:09:07.034  [2024-12-17 00:39:56.219376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:07.034  [2024-12-17 00:39:56.272554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.034  [2024-12-17 00:39:56.272638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:07.034  [2024-12-17 00:39:56.272747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:07.034  [2024-12-17 00:39:56.272748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:07.293   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.293   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.293   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.293   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.293   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.293   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.293   00:39:56	-- accel/accel.sh@21 -- # val=0xf
00:09:07.293   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.293   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.293   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.293   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.293   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.293   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=decompress
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@24 -- # accel_opc=decompress
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val='111250 bytes'
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=software
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@23 -- # accel_module=software
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=32
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=32
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=1
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=Yes
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:07.294   00:39:56	-- accel/accel.sh@21 -- # val=
00:09:07.294   00:39:56	-- accel/accel.sh@22 -- # case "$var" in
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # IFS=:
00:09:07.294   00:39:56	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@21 -- # val=
00:09:08.673   00:39:57	-- accel/accel.sh@22 -- # case "$var" in
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # IFS=:
00:09:08.673   00:39:57	-- accel/accel.sh@20 -- # read -r var val
00:09:08.673   00:39:57	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:08.673   00:39:57	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:09:08.673   00:39:57	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:08.673  
00:09:08.673  real	0m2.838s
00:09:08.673  user	0m9.342s
00:09:08.673  sys	0m0.358s
00:09:08.673   00:39:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:08.673   00:39:57	-- common/autotest_common.sh@10 -- # set +x
00:09:08.673  ************************************
00:09:08.673  END TEST accel_decomp_full_mcore
00:09:08.673  ************************************
00:09:08.673   00:39:57	-- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:09:08.673   00:39:57	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:09:08.673   00:39:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:08.673   00:39:57	-- common/autotest_common.sh@10 -- # set +x
00:09:08.673  ************************************
00:09:08.673  START TEST accel_decomp_mthread
00:09:08.673  ************************************
00:09:08.673   00:39:57	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:09:08.673   00:39:57	-- accel/accel.sh@16 -- # local accel_opc
00:09:08.673   00:39:57	-- accel/accel.sh@17 -- # local accel_module
00:09:08.673    00:39:57	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:09:08.673    00:39:57	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:09:08.673     00:39:57	-- accel/accel.sh@12 -- # build_accel_config
00:09:08.673     00:39:57	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:08.673     00:39:57	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:08.673     00:39:57	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:08.673     00:39:57	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:08.673     00:39:57	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:08.673     00:39:57	-- accel/accel.sh@41 -- # local IFS=,
00:09:08.673     00:39:57	-- accel/accel.sh@42 -- # jq -r .
00:09:08.673  [2024-12-17 00:39:57.583463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:08.673  [2024-12-17 00:39:57.583538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958330 ]
00:09:08.673  EAL: No free 2048 kB hugepages reported on node 1
00:09:08.673  [2024-12-17 00:39:57.688883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.673  [2024-12-17 00:39:57.738926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.052   00:39:58	-- accel/accel.sh@18 -- # out='Preparing input file...
00:09:10.052  
00:09:10.052  SPDK Configuration:
00:09:10.052  Core mask:      0x1
00:09:10.052  
00:09:10.052  Accel Perf Configuration:
00:09:10.052  Workload Type:  decompress
00:09:10.052  Transfer size:  4096 bytes
00:09:10.052  Vector count    1
00:09:10.052  Module:         software
00:09:10.052  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:10.052  Queue depth:    32
00:09:10.052  Allocate depth: 32
00:09:10.052  # threads/core: 2
00:09:10.052  Run time:       1 seconds
00:09:10.052  Verify:         Yes
00:09:10.052  
00:09:10.052  Running for 1 seconds...
00:09:10.052  
00:09:10.052  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:10.052  ------------------------------------------------------------------------------------
00:09:10.052  0,1                       28864/s        112 MiB/s                0                0
00:09:10.052  0,0                       28736/s        112 MiB/s                0                0
00:09:10.052  ====================================================================================
00:09:10.052  Total                     57600/s        225 MiB/s                0                0'
00:09:10.052   00:39:58	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:58	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052    00:39:58	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:09:10.052    00:39:58	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:09:10.052     00:39:58	-- accel/accel.sh@12 -- # build_accel_config
00:09:10.052     00:39:58	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:10.052     00:39:58	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:10.052     00:39:58	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:10.052     00:39:58	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:10.052     00:39:58	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:10.052     00:39:58	-- accel/accel.sh@41 -- # local IFS=,
00:09:10.052     00:39:58	-- accel/accel.sh@42 -- # jq -r .
00:09:10.052  [2024-12-17 00:39:58.981445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:10.052  [2024-12-17 00:39:58.981514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958530 ]
00:09:10.052  EAL: No free 2048 kB hugepages reported on node 1
00:09:10.052  [2024-12-17 00:39:59.088548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:10.052  [2024-12-17 00:39:59.138241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=0x1
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=decompress
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@24 -- # accel_opc=decompress
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val='4096 bytes'
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=software
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@23 -- # accel_module=software
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=32
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=32
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=2
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=Yes
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:10.052   00:39:59	-- accel/accel.sh@21 -- # val=
00:09:10.052   00:39:59	-- accel/accel.sh@22 -- # case "$var" in
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # IFS=:
00:09:10.052   00:39:59	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@21 -- # val=
00:09:11.432   00:40:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # IFS=:
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@21 -- # val=
00:09:11.432   00:40:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # IFS=:
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@21 -- # val=
00:09:11.432   00:40:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # IFS=:
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@21 -- # val=
00:09:11.432   00:40:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # IFS=:
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@21 -- # val=
00:09:11.432   00:40:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # IFS=:
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@21 -- # val=
00:09:11.432   00:40:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # IFS=:
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@21 -- # val=
00:09:11.432   00:40:00	-- accel/accel.sh@22 -- # case "$var" in
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # IFS=:
00:09:11.432   00:40:00	-- accel/accel.sh@20 -- # read -r var val
00:09:11.432   00:40:00	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:11.432   00:40:00	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:09:11.432   00:40:00	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:11.432  
00:09:11.432  real	0m2.805s
00:09:11.432  user	0m2.464s
00:09:11.432  sys	0m0.347s
00:09:11.432   00:40:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:11.432   00:40:00	-- common/autotest_common.sh@10 -- # set +x
00:09:11.432  ************************************
00:09:11.432  END TEST accel_decomp_mthread
00:09:11.432  ************************************
00:09:11.432   00:40:00	-- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:09:11.432   00:40:00	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:11.432   00:40:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:11.432   00:40:00	-- common/autotest_common.sh@10 -- # set +x
00:09:11.432  ************************************
00:09:11.432  START TEST accel_decomp_full_mthread
00:09:11.432  ************************************
00:09:11.432   00:40:00	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:09:11.432   00:40:00	-- accel/accel.sh@16 -- # local accel_opc
00:09:11.432   00:40:00	-- accel/accel.sh@17 -- # local accel_module
00:09:11.432    00:40:00	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:09:11.432    00:40:00	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:09:11.432     00:40:00	-- accel/accel.sh@12 -- # build_accel_config
00:09:11.432     00:40:00	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:11.432     00:40:00	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:11.432     00:40:00	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:11.432     00:40:00	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:11.432     00:40:00	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:11.433     00:40:00	-- accel/accel.sh@41 -- # local IFS=,
00:09:11.433     00:40:00	-- accel/accel.sh@42 -- # jq -r .
00:09:11.433  [2024-12-17 00:40:00.435112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:11.433  [2024-12-17 00:40:00.435183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958768 ]
00:09:11.433  EAL: No free 2048 kB hugepages reported on node 1
00:09:11.433  [2024-12-17 00:40:00.541916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:11.433  [2024-12-17 00:40:00.591830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.812   00:40:01	-- accel/accel.sh@18 -- # out='Preparing input file...
00:09:12.812  
00:09:12.812  SPDK Configuration:
00:09:12.812  Core mask:      0x1
00:09:12.812  
00:09:12.812  Accel Perf Configuration:
00:09:12.812  Workload Type:  decompress
00:09:12.812  Transfer size:  111250 bytes
00:09:12.812  Vector count    1
00:09:12.812  Module:         software
00:09:12.812  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:12.812  Queue depth:    32
00:09:12.812  Allocate depth: 32
00:09:12.812  # threads/core: 2
00:09:12.812  Run time:       1 seconds
00:09:12.812  Verify:         Yes
00:09:12.812  
00:09:12.812  Running for 1 seconds...
00:09:12.812  
00:09:12.812  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:09:12.812  ------------------------------------------------------------------------------------
00:09:12.812  0,1                        1952/s        207 MiB/s                0                0
00:09:12.812  0,0                        1920/s        203 MiB/s                0                0
00:09:12.812  ====================================================================================
00:09:12.812  Total                      3872/s        410 MiB/s                0                0'
00:09:12.812   00:40:01	-- accel/accel.sh@20 -- # IFS=:
00:09:12.812   00:40:01	-- accel/accel.sh@20 -- # read -r var val
00:09:12.812    00:40:01	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:09:12.812    00:40:01	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:09:12.812     00:40:01	-- accel/accel.sh@12 -- # build_accel_config
00:09:12.812     00:40:01	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:12.812     00:40:01	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:12.812     00:40:01	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:12.812     00:40:01	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:12.812     00:40:01	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:12.812     00:40:01	-- accel/accel.sh@41 -- # local IFS=,
00:09:12.812     00:40:01	-- accel/accel.sh@42 -- # jq -r .
00:09:12.812  [2024-12-17 00:40:01.866520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:12.812  [2024-12-17 00:40:01.866589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958978 ]
00:09:12.812  EAL: No free 2048 kB hugepages reported on node 1
00:09:12.812  [2024-12-17 00:40:01.972965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:12.812  [2024-12-17 00:40:02.022173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=0x1
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=decompress
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@24 -- # accel_opc=decompress
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val='111250 bytes'
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=software
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@23 -- # accel_module=software
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=32
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=32
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=2
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val='1 seconds'
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=Yes
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:13.072   00:40:02	-- accel/accel.sh@21 -- # val=
00:09:13.072   00:40:02	-- accel/accel.sh@22 -- # case "$var" in
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # IFS=:
00:09:13.072   00:40:02	-- accel/accel.sh@20 -- # read -r var val
00:09:14.010   00:40:03	-- accel/accel.sh@21 -- # val=
00:09:14.010   00:40:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # IFS=:
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # read -r var val
00:09:14.010   00:40:03	-- accel/accel.sh@21 -- # val=
00:09:14.010   00:40:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # IFS=:
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # read -r var val
00:09:14.010   00:40:03	-- accel/accel.sh@21 -- # val=
00:09:14.010   00:40:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # IFS=:
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # read -r var val
00:09:14.010   00:40:03	-- accel/accel.sh@21 -- # val=
00:09:14.010   00:40:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # IFS=:
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # read -r var val
00:09:14.010   00:40:03	-- accel/accel.sh@21 -- # val=
00:09:14.010   00:40:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # IFS=:
00:09:14.010   00:40:03	-- accel/accel.sh@20 -- # read -r var val
00:09:14.010   00:40:03	-- accel/accel.sh@21 -- # val=
00:09:14.270   00:40:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:14.270   00:40:03	-- accel/accel.sh@20 -- # IFS=:
00:09:14.270   00:40:03	-- accel/accel.sh@20 -- # read -r var val
00:09:14.270   00:40:03	-- accel/accel.sh@21 -- # val=
00:09:14.270   00:40:03	-- accel/accel.sh@22 -- # case "$var" in
00:09:14.270   00:40:03	-- accel/accel.sh@20 -- # IFS=:
00:09:14.270   00:40:03	-- accel/accel.sh@20 -- # read -r var val
00:09:14.270   00:40:03	-- accel/accel.sh@28 -- # [[ -n software ]]
00:09:14.270   00:40:03	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:09:14.270   00:40:03	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:14.270  
00:09:14.270  real	0m2.867s
00:09:14.270  user	0m2.530s
00:09:14.270  sys	0m0.340s
00:09:14.270   00:40:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:14.270   00:40:03	-- common/autotest_common.sh@10 -- # set +x
00:09:14.270  ************************************
00:09:14.270  END TEST accel_decomp_full_mthread
00:09:14.270  ************************************
00:09:14.270   00:40:03	-- accel/accel.sh@116 -- # [[ n == y ]]
00:09:14.270   00:40:03	-- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:09:14.270    00:40:03	-- accel/accel.sh@129 -- # build_accel_config
00:09:14.270   00:40:03	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:09:14.270   00:40:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:14.270    00:40:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:14.270   00:40:03	-- common/autotest_common.sh@10 -- # set +x
00:09:14.270    00:40:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:14.271    00:40:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:14.271    00:40:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:14.271    00:40:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:14.271    00:40:03	-- accel/accel.sh@41 -- # local IFS=,
00:09:14.271    00:40:03	-- accel/accel.sh@42 -- # jq -r .
00:09:14.271  ************************************
00:09:14.271  START TEST accel_dif_functional_tests
00:09:14.271  ************************************
00:09:14.271   00:40:03	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:09:14.271  [2024-12-17 00:40:03.372493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:14.271  [2024-12-17 00:40:03.372572] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959185 ]
00:09:14.271  EAL: No free 2048 kB hugepages reported on node 1
00:09:14.271  [2024-12-17 00:40:03.480993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:14.530  [2024-12-17 00:40:03.538829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:14.530  [2024-12-17 00:40:03.538927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:14.530  [2024-12-17 00:40:03.538931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.530  [2024-12-17 00:40:03.710568] 'OCF_Core' volume operations registered
00:09:14.530  [2024-12-17 00:40:03.712708] 'OCF_Cache' volume operations registered
00:09:14.530  [2024-12-17 00:40:03.715285] 'OCF Composite' volume operations registered
00:09:14.530  [2024-12-17 00:40:03.717457] 'SPDK_block_device' volume operations registered
00:09:14.530  
00:09:14.530  
00:09:14.530       CUnit - A unit testing framework for C - Version 2.1-3
00:09:14.530       http://cunit.sourceforge.net/
00:09:14.530  
00:09:14.530  
00:09:14.530  Suite: accel_dif
00:09:14.530    Test: verify: DIF generated, GUARD check ...passed
00:09:14.530    Test: verify: DIF generated, APPTAG check ...passed
00:09:14.530    Test: verify: DIF generated, REFTAG check ...passed
00:09:14.530    Test: verify: DIF not generated, GUARD check ...[2024-12-17 00:40:03.722998] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:09:14.530  [2024-12-17 00:40:03.723050] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:09:14.530  passed
00:09:14.530    Test: verify: DIF not generated, APPTAG check ...[2024-12-17 00:40:03.723089] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:09:14.530  [2024-12-17 00:40:03.723113] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:09:14.530  passed
00:09:14.530    Test: verify: DIF not generated, REFTAG check ...[2024-12-17 00:40:03.723140] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:09:14.530  [2024-12-17 00:40:03.723163] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:09:14.530  passed
00:09:14.530    Test: verify: APPTAG correct, APPTAG check ...passed
00:09:14.530    Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-17 00:40:03.723223] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30,  Expected=28, Actual=14
00:09:14.530  passed
00:09:14.530    Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:09:14.530    Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:09:14.530    Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:09:14.530    Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-17 00:40:03.723371] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:09:14.530  passed
00:09:14.530    Test: generate copy: DIF generated, GUARD check ...passed
00:09:14.530    Test: generate copy: DIF generated, APPTAG check ...passed
00:09:14.530    Test: generate copy: DIF generated, REFTAG check ...passed
00:09:14.530    Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:09:14.530    Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:09:14.530    Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:09:14.530    Test: generate copy: iovecs-len validate ...[2024-12-17 00:40:03.723620] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:09:14.530  passed
00:09:14.530    Test: generate copy: buffer alignment validate ...passed
00:09:14.530  
00:09:14.530  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:14.530                suites      1      1    n/a      0        0
00:09:14.530                 tests     20     20     20      0        0
00:09:14.530               asserts    204    204    204      0      n/a
00:09:14.530  
00:09:14.530  Elapsed time =    0.002 seconds
00:09:15.099  
00:09:15.099  real	0m0.728s
00:09:15.099  user	0m1.291s
00:09:15.099  sys	0m0.293s
00:09:15.099   00:40:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:15.099   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.099  ************************************
00:09:15.099  END TEST accel_dif_functional_tests
00:09:15.099  ************************************
00:09:15.099  
00:09:15.099  real	1m0.348s
00:09:15.099  user	1m6.925s
00:09:15.099  sys	0m9.013s
00:09:15.099   00:40:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:15.099   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.099  ************************************
00:09:15.099  END TEST accel
00:09:15.099  ************************************
00:09:15.099   00:40:04	-- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh
00:09:15.099   00:40:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:15.099   00:40:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:15.099   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.099  ************************************
00:09:15.099  START TEST accel_rpc
00:09:15.099  ************************************
00:09:15.099   00:40:04	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh
00:09:15.099  * Looking for test storage...
00:09:15.099  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel
00:09:15.099    00:40:04	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:15.099     00:40:04	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:15.099     00:40:04	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:15.099    00:40:04	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:15.099    00:40:04	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:15.099    00:40:04	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:15.099    00:40:04	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:15.099    00:40:04	-- scripts/common.sh@335 -- # IFS=.-:
00:09:15.099    00:40:04	-- scripts/common.sh@335 -- # read -ra ver1
00:09:15.099    00:40:04	-- scripts/common.sh@336 -- # IFS=.-:
00:09:15.099    00:40:04	-- scripts/common.sh@336 -- # read -ra ver2
00:09:15.099    00:40:04	-- scripts/common.sh@337 -- # local 'op=<'
00:09:15.099    00:40:04	-- scripts/common.sh@339 -- # ver1_l=2
00:09:15.099    00:40:04	-- scripts/common.sh@340 -- # ver2_l=1
00:09:15.099    00:40:04	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:15.099    00:40:04	-- scripts/common.sh@343 -- # case "$op" in
00:09:15.099    00:40:04	-- scripts/common.sh@344 -- # : 1
00:09:15.099    00:40:04	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:15.099    00:40:04	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:15.099     00:40:04	-- scripts/common.sh@364 -- # decimal 1
00:09:15.099     00:40:04	-- scripts/common.sh@352 -- # local d=1
00:09:15.099     00:40:04	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:15.099     00:40:04	-- scripts/common.sh@354 -- # echo 1
00:09:15.099    00:40:04	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:15.099     00:40:04	-- scripts/common.sh@365 -- # decimal 2
00:09:15.099     00:40:04	-- scripts/common.sh@352 -- # local d=2
00:09:15.099     00:40:04	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:15.099     00:40:04	-- scripts/common.sh@354 -- # echo 2
00:09:15.099    00:40:04	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:15.099    00:40:04	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:15.099    00:40:04	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:15.099    00:40:04	-- scripts/common.sh@367 -- # return 0
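[Editor's note] The scripts/common.sh trace above is a component-wise version comparison: both versions are split on '.', '-' and ':' into arrays, then compared element by element with missing components treated as 0. A condensed, self-contained sketch of that logic:

    lt() {
            local -a v1 v2; local i
            IFS=.-: read -ra v1 <<< "$1"
            IFS=.-: read -ra v2 <<< "$2"
            for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
                    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
                    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            done
            return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2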
00:09:15.099    00:40:04	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:15.099    00:40:04	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:15.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.099  		--rc genhtml_branch_coverage=1
00:09:15.099  		--rc genhtml_function_coverage=1
00:09:15.099  		--rc genhtml_legend=1
00:09:15.099  		--rc geninfo_all_blocks=1
00:09:15.099  		--rc geninfo_unexecuted_blocks=1
00:09:15.099  		
00:09:15.099  		'
00:09:15.099    00:40:04	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:15.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.099  		--rc genhtml_branch_coverage=1
00:09:15.099  		--rc genhtml_function_coverage=1
00:09:15.099  		--rc genhtml_legend=1
00:09:15.099  		--rc geninfo_all_blocks=1
00:09:15.099  		--rc geninfo_unexecuted_blocks=1
00:09:15.099  		
00:09:15.099  		'
00:09:15.099    00:40:04	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:15.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.099  		--rc genhtml_branch_coverage=1
00:09:15.099  		--rc genhtml_function_coverage=1
00:09:15.099  		--rc genhtml_legend=1
00:09:15.099  		--rc geninfo_all_blocks=1
00:09:15.099  		--rc geninfo_unexecuted_blocks=1
00:09:15.099  		
00:09:15.099  		'
00:09:15.099    00:40:04	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:15.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.099  		--rc genhtml_branch_coverage=1
00:09:15.099  		--rc genhtml_function_coverage=1
00:09:15.099  		--rc genhtml_legend=1
00:09:15.099  		--rc geninfo_all_blocks=1
00:09:15.099  		--rc geninfo_unexecuted_blocks=1
00:09:15.099  		
00:09:15.099  		'
00:09:15.099   00:40:04	-- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:15.099   00:40:04	-- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=959314
00:09:15.099   00:40:04	-- accel/accel_rpc.sh@15 -- # waitforlisten 959314
00:09:15.099   00:40:04	-- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:09:15.099   00:40:04	-- common/autotest_common.sh@829 -- # '[' -z 959314 ']'
00:09:15.099   00:40:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:15.099   00:40:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:15.099   00:40:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:15.099  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:15.099   00:40:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:15.099   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.358  [2024-12-17 00:40:04.405365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:15.358  [2024-12-17 00:40:04.405437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959314 ]
00:09:15.358  EAL: No free 2048 kB hugepages reported on node 1
00:09:15.358  [2024-12-17 00:40:04.512148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:15.358  [2024-12-17 00:40:04.563335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:15.358  [2024-12-17 00:40:04.563494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.358   00:40:04	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:15.358   00:40:04	-- common/autotest_common.sh@862 -- # return 0
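[Editor's note] waitforlisten, traced above, polls until the freshly started spdk_tgt is accepting RPCs on its UNIX socket. A simplified sketch under the assumption that socket existence is a sufficient readiness signal (the real helper also probes the RPC endpoint and handles retries differently):

    waitforlisten() {
            local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
            for (( i = 100; i; i-- )); do
                    kill -0 "$pid" || return 1     # target died during startup
                    [[ -S $sock ]] && return 0     # socket is up
                    sleep 0.1
            done
            return 1
    }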
00:09:15.358   00:40:04	-- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:09:15.358   00:40:04	-- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:09:15.358   00:40:04	-- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:09:15.358   00:40:04	-- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:09:15.358   00:40:04	-- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:09:15.358   00:40:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:15.358   00:40:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:15.358   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.617  ************************************
00:09:15.617  START TEST accel_assign_opcode
00:09:15.617  ************************************
00:09:15.617   00:40:04	-- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite
00:09:15.617   00:40:04	-- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:09:15.617   00:40:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.617   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.617  [2024-12-17 00:40:04.628061] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:09:15.617   00:40:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.617   00:40:04	-- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:09:15.617   00:40:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.617   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.617  [2024-12-17 00:40:04.636076] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:09:15.617   00:40:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.617   00:40:04	-- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:09:15.617   00:40:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.617   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.617  [2024-12-17 00:40:04.817949] 'OCF_Core' volume operations registered
00:09:15.617  [2024-12-17 00:40:04.820370] 'OCF_Cache' volume operations registered
00:09:15.617  [2024-12-17 00:40:04.823274] 'OCF Composite' volume operations registered
00:09:15.617  [2024-12-17 00:40:04.825724] 'SPDK_block_device' volume operations registered
00:09:15.877   00:40:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.877   00:40:04	-- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:09:15.877   00:40:04	-- accel/accel_rpc.sh@42 -- # jq -r .copy
00:09:15.877   00:40:04	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.877   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.877   00:40:04	-- accel/accel_rpc.sh@42 -- # grep software
00:09:15.877   00:40:04	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.877  software
00:09:15.877  
00:09:15.877  real	0m0.368s
00:09:15.877  user	0m0.046s
00:09:15.877  sys	0m0.016s
00:09:15.877   00:40:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:15.877   00:40:04	-- common/autotest_common.sh@10 -- # set +x
00:09:15.877  ************************************
00:09:15.877  END TEST accel_assign_opcode
00:09:15.877  ************************************
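
The accel_assign_opcode suite above reduces to three RPCs against the running spdk_tgt. A minimal standalone sketch, assuming a target already listening on /var/tmp/spdk.sock and run from the spdk tree:

  # Assigning "copy" to a bogus module is accepted at config time; the
  # assignment is only applied once the framework initializes.
  scripts/rpc.py accel_assign_opc -o copy -m incorrect
  scripts/rpc.py accel_assign_opc -o copy -m software

  # Finish subsystem init so the opcode table is built.
  scripts/rpc.py framework_start_init

  # Verify "copy" is now served by the software module.
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected: software
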
00:09:15.877   00:40:05	-- accel/accel_rpc.sh@55 -- # killprocess 959314
00:09:15.877   00:40:05	-- common/autotest_common.sh@936 -- # '[' -z 959314 ']'
00:09:15.877   00:40:05	-- common/autotest_common.sh@940 -- # kill -0 959314
00:09:15.877    00:40:05	-- common/autotest_common.sh@941 -- # uname
00:09:15.877   00:40:05	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:15.877    00:40:05	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 959314
00:09:15.877   00:40:05	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:15.877   00:40:05	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:15.877   00:40:05	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 959314'
00:09:15.877  killing process with pid 959314
00:09:15.877   00:40:05	-- common/autotest_common.sh@955 -- # kill 959314
00:09:15.877   00:40:05	-- common/autotest_common.sh@960 -- # wait 959314
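
The killprocess helper traced here follows a fixed pattern; a condensed sketch reconstructed from the xtrace (the real helper in common/autotest_common.sh carries extra sudo/retry handling not shown):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # no pid supplied
      kill -0 "$pid" || return 1                 # process must still be alive
      if [ "$(uname)" = Linux ]; then
          # never signal a bare sudo wrapper directly
          [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }
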
00:09:16.444  
00:09:16.444  real	0m1.438s
00:09:16.444  user	0m1.239s
00:09:16.444  sys	0m0.623s
00:09:16.444   00:40:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:16.444   00:40:05	-- common/autotest_common.sh@10 -- # set +x
00:09:16.444  ************************************
00:09:16.444  END TEST accel_rpc
00:09:16.444  ************************************
00:09:16.444   00:40:05	-- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh
00:09:16.444   00:40:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:16.444   00:40:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:16.444   00:40:05	-- common/autotest_common.sh@10 -- # set +x
00:09:16.444  ************************************
00:09:16.444  START TEST app_cmdline
00:09:16.444  ************************************
00:09:16.444   00:40:05	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh
00:09:16.703  * Looking for test storage...
00:09:16.703  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app
00:09:16.703    00:40:05	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:16.703     00:40:05	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:16.703     00:40:05	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:16.703    00:40:05	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:16.703    00:40:05	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:16.703    00:40:05	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:16.703    00:40:05	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:16.704    00:40:05	-- scripts/common.sh@335 -- # IFS=.-:
00:09:16.704    00:40:05	-- scripts/common.sh@335 -- # read -ra ver1
00:09:16.704    00:40:05	-- scripts/common.sh@336 -- # IFS=.-:
00:09:16.704    00:40:05	-- scripts/common.sh@336 -- # read -ra ver2
00:09:16.704    00:40:05	-- scripts/common.sh@337 -- # local 'op=<'
00:09:16.704    00:40:05	-- scripts/common.sh@339 -- # ver1_l=2
00:09:16.704    00:40:05	-- scripts/common.sh@340 -- # ver2_l=1
00:09:16.704    00:40:05	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:16.704    00:40:05	-- scripts/common.sh@343 -- # case "$op" in
00:09:16.704    00:40:05	-- scripts/common.sh@344 -- # : 1
00:09:16.704    00:40:05	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:16.704    00:40:05	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:16.704     00:40:05	-- scripts/common.sh@364 -- # decimal 1
00:09:16.704     00:40:05	-- scripts/common.sh@352 -- # local d=1
00:09:16.704     00:40:05	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:16.704     00:40:05	-- scripts/common.sh@354 -- # echo 1
00:09:16.704    00:40:05	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:16.704     00:40:05	-- scripts/common.sh@365 -- # decimal 2
00:09:16.704     00:40:05	-- scripts/common.sh@352 -- # local d=2
00:09:16.704     00:40:05	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:16.704     00:40:05	-- scripts/common.sh@354 -- # echo 2
00:09:16.704    00:40:05	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:16.704    00:40:05	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:16.704    00:40:05	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:16.704    00:40:05	-- scripts/common.sh@367 -- # return 0
00:09:16.704    00:40:05	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:16.704    00:40:05	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:16.704  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.704  		--rc genhtml_branch_coverage=1
00:09:16.704  		--rc genhtml_function_coverage=1
00:09:16.704  		--rc genhtml_legend=1
00:09:16.704  		--rc geninfo_all_blocks=1
00:09:16.704  		--rc geninfo_unexecuted_blocks=1
00:09:16.704  		
00:09:16.704  		'
00:09:16.704    00:40:05	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:16.704  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.704  		--rc genhtml_branch_coverage=1
00:09:16.704  		--rc genhtml_function_coverage=1
00:09:16.704  		--rc genhtml_legend=1
00:09:16.704  		--rc geninfo_all_blocks=1
00:09:16.704  		--rc geninfo_unexecuted_blocks=1
00:09:16.704  		
00:09:16.704  		'
00:09:16.704    00:40:05	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:16.704  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.704  		--rc genhtml_branch_coverage=1
00:09:16.704  		--rc genhtml_function_coverage=1
00:09:16.704  		--rc genhtml_legend=1
00:09:16.704  		--rc geninfo_all_blocks=1
00:09:16.704  		--rc geninfo_unexecuted_blocks=1
00:09:16.704  		
00:09:16.704  		'
00:09:16.704    00:40:05	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:16.704  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.704  		--rc genhtml_branch_coverage=1
00:09:16.704  		--rc genhtml_function_coverage=1
00:09:16.704  		--rc genhtml_legend=1
00:09:16.704  		--rc geninfo_all_blocks=1
00:09:16.704  		--rc geninfo_unexecuted_blocks=1
00:09:16.704  		
00:09:16.704  		'
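
The lt 1.15 2 expansion above is a generic component-wise version comparison: both strings are split on ., - and :, the shorter array is zero-padded, and components are compared numerically left to right. A condensed sketch of the same idea, assuming purely numeric components:

  lt() {   # returns 0 (true) iff version $1 sorts strictly before $2
      local IFS=.-: i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # versions are equal
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov options selected"
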
00:09:16.704   00:40:05	-- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:09:16.704   00:40:05	-- app/cmdline.sh@17 -- # spdk_tgt_pid=959627
00:09:16.704   00:40:05	-- app/cmdline.sh@18 -- # waitforlisten 959627
00:09:16.704   00:40:05	-- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:09:16.704   00:40:05	-- common/autotest_common.sh@829 -- # '[' -z 959627 ']'
00:09:16.704   00:40:05	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:16.704   00:40:05	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:16.704   00:40:05	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:16.704  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:16.704   00:40:05	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:16.704   00:40:05	-- common/autotest_common.sh@10 -- # set +x
00:09:16.704  [2024-12-17 00:40:05.890813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:16.704  [2024-12-17 00:40:05.890902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959627 ]
00:09:16.704  EAL: No free 2048 kB hugepages reported on node 1
00:09:16.963  [2024-12-17 00:40:06.000321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:16.963  [2024-12-17 00:40:06.050943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:16.963  [2024-12-17 00:40:06.051097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.222  [2024-12-17 00:40:06.226665] 'OCF_Core' volume operations registered
00:09:17.222  [2024-12-17 00:40:06.228856] 'OCF_Cache' volume operations registered
00:09:17.222  [2024-12-17 00:40:06.231505] 'OCF Composite' volume operations registered
00:09:17.222  [2024-12-17 00:40:06.233729] 'SPDK_block_device' volume operations registered
00:09:17.790   00:40:06	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:17.790   00:40:06	-- common/autotest_common.sh@862 -- # return 0
00:09:17.790   00:40:06	-- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:09:18.049  {
00:09:18.049    "version": "SPDK v24.01.1-pre git sha1 c13c99a5e",
00:09:18.049    "fields": {
00:09:18.049      "major": 24,
00:09:18.049      "minor": 1,
00:09:18.049      "patch": 1,
00:09:18.049      "suffix": "-pre",
00:09:18.049      "commit": "c13c99a5e"
00:09:18.049    }
00:09:18.049  }
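
The version object printed above can be consumed directly with jq; for instance, a sketch extracting the same fields the test later compares:

  ver=$(scripts/rpc.py spdk_get_version)
  echo "$ver" | jq -r .version      # "SPDK v24.01.1-pre git sha1 c13c99a5e"
  echo "$ver" | jq -r '.fields | "\(.major).\(.minor).\(.patch)\(.suffix)"'   # 24.1.1-pre
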
00:09:18.049   00:40:07	-- app/cmdline.sh@22 -- # expected_methods=()
00:09:18.049   00:40:07	-- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:09:18.049   00:40:07	-- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:09:18.049   00:40:07	-- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:09:18.049    00:40:07	-- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:09:18.049    00:40:07	-- app/cmdline.sh@26 -- # jq -r '.[]'
00:09:18.049    00:40:07	-- app/cmdline.sh@26 -- # sort
00:09:18.049    00:40:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.049    00:40:07	-- common/autotest_common.sh@10 -- # set +x
00:09:18.049    00:40:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.049   00:40:07	-- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:09:18.049   00:40:07	-- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:09:18.049   00:40:07	-- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:18.049   00:40:07	-- common/autotest_common.sh@650 -- # local es=0
00:09:18.049   00:40:07	-- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:18.049   00:40:07	-- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:09:18.049   00:40:07	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:18.049    00:40:07	-- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:09:18.049   00:40:07	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:18.049    00:40:07	-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:09:18.049   00:40:07	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:18.049   00:40:07	-- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:09:18.049   00:40:07	-- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py ]]
00:09:18.049   00:40:07	-- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:18.308  request:
00:09:18.308  {
00:09:18.308    "method": "env_dpdk_get_mem_stats",
00:09:18.308    "req_id": 1
00:09:18.308  }
00:09:18.308  Got JSON-RPC error response
00:09:18.308  response:
00:09:18.308  {
00:09:18.308    "code": -32601,
00:09:18.308    "message": "Method not found"
00:09:18.308  }
00:09:18.308   00:40:07	-- common/autotest_common.sh@653 -- # es=1
00:09:18.308   00:40:07	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:18.308   00:40:07	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:18.308   00:40:07	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
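
The -32601 above is the point of the test: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other method is rejected before dispatch. Reproduced by hand (paths relative to the spdk tree):

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  scripts/rpc.py rpc_get_methods           # succeeds: exactly the two allowed methods
  scripts/rpc.py env_dpdk_get_mem_stats    # fails with JSON-RPC -32601 "Method not found"
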
00:09:18.308   00:40:07	-- app/cmdline.sh@1 -- # killprocess 959627
00:09:18.308   00:40:07	-- common/autotest_common.sh@936 -- # '[' -z 959627 ']'
00:09:18.308   00:40:07	-- common/autotest_common.sh@940 -- # kill -0 959627
00:09:18.308    00:40:07	-- common/autotest_common.sh@941 -- # uname
00:09:18.308   00:40:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:18.308    00:40:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 959627
00:09:18.308   00:40:07	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:18.308   00:40:07	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:18.308   00:40:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 959627'
00:09:18.308  killing process with pid 959627
00:09:18.308   00:40:07	-- common/autotest_common.sh@955 -- # kill 959627
00:09:18.308   00:40:07	-- common/autotest_common.sh@960 -- # wait 959627
00:09:18.876  
00:09:18.876  real	0m2.291s
00:09:18.876  user	0m2.654s
00:09:18.876  sys	0m0.727s
00:09:18.876   00:40:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:18.876   00:40:07	-- common/autotest_common.sh@10 -- # set +x
00:09:18.876  ************************************
00:09:18.876  END TEST app_cmdline
00:09:18.876  ************************************
00:09:18.876   00:40:07	-- spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh
00:09:18.876   00:40:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:18.876   00:40:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:18.876   00:40:07	-- common/autotest_common.sh@10 -- # set +x
00:09:18.876  ************************************
00:09:18.876  START TEST version
00:09:18.876  ************************************
00:09:18.876   00:40:07	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh
00:09:18.876  * Looking for test storage...
00:09:18.876  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app
00:09:19.136    00:40:08	-- app/version.sh@17 -- # get_header_version major
00:09:19.136    00:40:08	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:09:19.136    00:40:08	-- app/version.sh@14 -- # cut -f2
00:09:19.136    00:40:08	-- app/version.sh@14 -- # tr -d '"'
00:09:19.136   00:40:08	-- app/version.sh@17 -- # major=24
00:09:19.136    00:40:08	-- app/version.sh@18 -- # get_header_version minor
00:09:19.136    00:40:08	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:09:19.136    00:40:08	-- app/version.sh@14 -- # cut -f2
00:09:19.136    00:40:08	-- app/version.sh@14 -- # tr -d '"'
00:09:19.136   00:40:08	-- app/version.sh@18 -- # minor=1
00:09:19.136    00:40:08	-- app/version.sh@19 -- # get_header_version patch
00:09:19.136    00:40:08	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:09:19.136    00:40:08	-- app/version.sh@14 -- # cut -f2
00:09:19.136    00:40:08	-- app/version.sh@14 -- # tr -d '"'
00:09:19.136   00:40:08	-- app/version.sh@19 -- # patch=1
00:09:19.136    00:40:08	-- app/version.sh@20 -- # get_header_version suffix
00:09:19.136    00:40:08	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:09:19.136    00:40:08	-- app/version.sh@14 -- # cut -f2
00:09:19.136    00:40:08	-- app/version.sh@14 -- # tr -d '"'
00:09:19.136   00:40:08	-- app/version.sh@20 -- # suffix=-pre
00:09:19.136   00:40:08	-- app/version.sh@22 -- # version=24.1
00:09:19.136   00:40:08	-- app/version.sh@25 -- # (( patch != 0 ))
00:09:19.136   00:40:08	-- app/version.sh@25 -- # version=24.1.1
00:09:19.136   00:40:08	-- app/version.sh@28 -- # version=24.1.1rc0
00:09:19.136   00:40:08	-- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python
00:09:19.136    00:40:08	-- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:09:19.136   00:40:08	-- app/version.sh@30 -- # py_version=24.1.1rc0
00:09:19.136   00:40:08	-- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]]
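
get_header_version above is plain grep/cut/tr over include/spdk/version.h, and the composite string is rebuilt from its parts. A sketch of the same assembly; note the -pre-to-rc0 mapping is inferred from the 24.1.1 -> 24.1.1rc0 transition visible in the trace:

  hdr=include/spdk/version.h
  get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }

  major=$(get MAJOR) minor=$(get MINOR) patch=$(get PATCH) suffix=$(get SUFFIX)
  version=$major.$minor
  (( patch != 0 )) && version=$version.$patch
  [[ $suffix == -pre ]] && version=${version}rc0     # 24.1.1 -> 24.1.1rc0
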
00:09:19.136  
00:09:19.136  real	0m0.293s
00:09:19.136  user	0m0.169s
00:09:19.136  sys	0m0.180s
00:09:19.136   00:40:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:19.136   00:40:08	-- common/autotest_common.sh@10 -- # set +x
00:09:19.136  ************************************
00:09:19.136  END TEST version
00:09:19.136  ************************************
00:09:19.136   00:40:08	-- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']'
00:09:19.136    00:40:08	-- spdk/autotest.sh@191 -- # uname -s
00:09:19.136   00:40:08	-- spdk/autotest.sh@191 -- # [[ Linux == Linux ]]
00:09:19.136   00:40:08	-- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]]
00:09:19.136   00:40:08	-- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]]
00:09:19.136   00:40:08	-- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']'
00:09:19.136   00:40:08	-- spdk/autotest.sh@205 -- # run_test blockdev_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme
00:09:19.136   00:40:08	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:19.136   00:40:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:19.136   00:40:08	-- common/autotest_common.sh@10 -- # set +x
00:09:19.136  ************************************
00:09:19.136  START TEST blockdev_nvme
00:09:19.136  ************************************
00:09:19.136   00:40:08	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme
00:09:19.395  * Looking for test storage...
00:09:19.395  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev
00:09:19.396   00:40:08	-- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh
00:09:19.396    00:40:08	-- bdev/nbd_common.sh@6 -- # set -e
00:09:19.396   00:40:08	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:09:19.396   00:40:08	-- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:09:19.396   00:40:08	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json
00:09:19.396   00:40:08	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json
00:09:19.396   00:40:08	-- bdev/blockdev.sh@18 -- # :
00:09:19.396   00:40:08	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:09:19.396   00:40:08	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:09:19.396   00:40:08	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:09:19.396    00:40:08	-- bdev/blockdev.sh@672 -- # uname -s
00:09:19.396   00:40:08	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:09:19.396   00:40:08	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:09:19.396   00:40:08	-- bdev/blockdev.sh@680 -- # test_type=nvme
00:09:19.396   00:40:08	-- bdev/blockdev.sh@681 -- # crypto_device=
00:09:19.396   00:40:08	-- bdev/blockdev.sh@682 -- # dek=
00:09:19.396   00:40:08	-- bdev/blockdev.sh@683 -- # env_ctx=
00:09:19.396   00:40:08	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:09:19.396   00:40:08	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:09:19.396   00:40:08	-- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]]
00:09:19.396   00:40:08	-- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]]
00:09:19.396   00:40:08	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:09:19.396   00:40:08	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=960112
00:09:19.396   00:40:08	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:09:19.396   00:40:08	-- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' ''
00:09:19.396   00:40:08	-- bdev/blockdev.sh@47 -- # waitforlisten 960112
00:09:19.396   00:40:08	-- common/autotest_common.sh@829 -- # '[' -z 960112 ']'
00:09:19.396   00:40:08	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:19.396   00:40:08	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:19.396   00:40:08	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:19.396  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:19.396   00:40:08	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:19.396   00:40:08	-- common/autotest_common.sh@10 -- # set +x
00:09:19.396  [2024-12-17 00:40:08.602932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:19.396  [2024-12-17 00:40:08.603014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960112 ]
00:09:19.396  EAL: No free 2048 kB hugepages reported on node 1
00:09:19.656  [2024-12-17 00:40:08.710525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.656  [2024-12-17 00:40:08.756646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:19.656  [2024-12-17 00:40:08.756812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:19.656  [2024-12-17 00:40:08.914317] 'OCF_Core' volume operations registered
00:09:19.656  [2024-12-17 00:40:08.916480] 'OCF_Cache' volume operations registered
00:09:19.915  [2024-12-17 00:40:08.919036] 'OCF Composite' volume operations registered
00:09:19.915  [2024-12-17 00:40:08.921226] 'SPDK_block_device' volume operations registered
00:09:20.498   00:40:09	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:20.498   00:40:09	-- common/autotest_common.sh@862 -- # return 0
00:09:20.498   00:40:09	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:09:20.498   00:40:09	-- bdev/blockdev.sh@697 -- # setup_nvme_conf
00:09:20.498   00:40:09	-- bdev/blockdev.sh@79 -- # local json
00:09:20.498   00:40:09	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:09:20.498    00:40:09	-- bdev/blockdev.sh@80 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:09:20.498   00:40:09	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:5e:00.0" } } ] }'\'''
00:09:20.498   00:40:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.498   00:40:09	-- common/autotest_common.sh@10 -- # set +x
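
setup_nvme_conf pipes the output of scripts/gen_nvme.sh into load_subsystem_config; stripped of the mapfile plumbing, the call seen above amounts to (the traddr is specific to this node):

  json='{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:5e:00.0" } } ] }'
  scripts/rpc.py load_subsystem_config -j "$json"
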
00:09:23.790   00:40:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.790   00:40:12	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:09:23.790   00:40:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.790   00:40:12	-- common/autotest_common.sh@10 -- # set +x
00:09:23.790   00:40:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.790   00:40:12	-- bdev/blockdev.sh@738 -- # cat
00:09:23.790    00:40:12	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:09:23.790    00:40:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.790    00:40:12	-- common/autotest_common.sh@10 -- # set +x
00:09:23.790    00:40:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.790    00:40:12	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:09:23.790    00:40:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.790    00:40:12	-- common/autotest_common.sh@10 -- # set +x
00:09:23.790    00:40:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.790    00:40:12	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:09:23.790    00:40:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.790    00:40:12	-- common/autotest_common.sh@10 -- # set +x
00:09:23.790    00:40:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.790   00:40:12	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:09:23.790    00:40:12	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:09:23.790    00:40:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:23.790    00:40:12	-- common/autotest_common.sh@10 -- # set +x
00:09:23.790    00:40:12	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:09:23.790    00:40:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:23.790   00:40:12	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:09:23.790    00:40:12	-- bdev/blockdev.sh@747 -- # jq -r .name
00:09:23.790    00:40:12	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "851255c8-444f-499b-b050-c68d239378e5"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 512,' '  "num_blocks": 7814037168,' '  "uuid": "851255c8-444f-499b-b050-c68d239378e5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": true,' '    "nvme_io": true' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:5e:00.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:5e:00.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x8086",' '          "model_number": "INTEL SSDPE2KX040T8",' '          "serial_number": "BTLJ83030AK84P0DGN",' '          "firmware_revision": "VDV10184",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 1,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.2"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:09:23.790   00:40:12	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:09:23.790   00:40:12	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1
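
The hello-world bdev is chosen by listing unclaimed bdevs and taking the first name; as a single pipeline against the same target:

  scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name' \
    | head -n1       # -> Nvme0n1
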
00:09:23.790   00:40:12	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:09:23.790   00:40:12	-- bdev/blockdev.sh@752 -- # killprocess 960112
00:09:23.790   00:40:12	-- common/autotest_common.sh@936 -- # '[' -z 960112 ']'
00:09:23.790   00:40:12	-- common/autotest_common.sh@940 -- # kill -0 960112
00:09:23.790    00:40:12	-- common/autotest_common.sh@941 -- # uname
00:09:23.790   00:40:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:23.790    00:40:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 960112
00:09:23.790   00:40:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:23.790   00:40:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:23.790   00:40:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 960112'
00:09:23.790  killing process with pid 960112
00:09:23.790   00:40:12	-- common/autotest_common.sh@955 -- # kill 960112
00:09:23.790   00:40:12	-- common/autotest_common.sh@960 -- # wait 960112
00:09:27.982   00:40:16	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:27.982   00:40:16	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:27.982   00:40:16	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:27.982   00:40:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:27.982   00:40:16	-- common/autotest_common.sh@10 -- # set +x
00:09:27.982  ************************************
00:09:27.982  START TEST bdev_hello_world
00:09:27.982  ************************************
00:09:27.982   00:40:16	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:27.982  [2024-12-17 00:40:16.844319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:27.982  [2024-12-17 00:40:16.844387] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961298 ]
00:09:27.982  EAL: No free 2048 kB hugepages reported on node 1
00:09:27.982  [2024-12-17 00:40:16.947715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:27.983  [2024-12-17 00:40:16.997478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.983  [2024-12-17 00:40:17.235772] 'OCF_Core' volume operations registered
00:09:27.983  [2024-12-17 00:40:17.238211] 'OCF_Cache' volume operations registered
00:09:27.983  [2024-12-17 00:40:17.241150] 'OCF Composite' volume operations registered
00:09:27.983  [2024-12-17 00:40:17.243606] 'SPDK_block_device' volume operations registered
00:09:31.273  [2024-12-17 00:40:20.105845] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:09:31.273  [2024-12-17 00:40:20.105884] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:09:31.273  [2024-12-17 00:40:20.105912] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:09:31.273  [2024-12-17 00:40:20.108114] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:09:31.273  [2024-12-17 00:40:20.108293] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:09:31.273  [2024-12-17 00:40:20.108311] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:09:31.273  [2024-12-17 00:40:20.109047] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:09:31.273  
00:09:31.273  [2024-12-17 00:40:20.109074] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
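
Those notices are the entire lifecycle of the hello_bdev example: open the bdev, open an I/O channel, write "Hello World!", read it back, stop. The invocation used above, relative to the spdk tree:

  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
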
00:09:35.466  
00:09:35.466  real	0m7.304s
00:09:35.466  user	0m6.183s
00:09:35.466  sys	0m0.365s
00:09:35.466   00:40:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:35.466   00:40:24	-- common/autotest_common.sh@10 -- # set +x
00:09:35.466  ************************************
00:09:35.466  END TEST bdev_hello_world
00:09:35.466  ************************************
00:09:35.466   00:40:24	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:09:35.466   00:40:24	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:35.466   00:40:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:35.466   00:40:24	-- common/autotest_common.sh@10 -- # set +x
00:09:35.466  ************************************
00:09:35.466  START TEST bdev_bounds
00:09:35.466  ************************************
00:09:35.466   00:40:24	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:09:35.466   00:40:24	-- bdev/blockdev.sh@288 -- # bdevio_pid=962220
00:09:35.466   00:40:24	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:09:35.466   00:40:24	-- bdev/blockdev.sh@287 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:09:35.466   00:40:24	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 962220'
00:09:35.466  Process bdevio pid: 962220
00:09:35.466   00:40:24	-- bdev/blockdev.sh@291 -- # waitforlisten 962220
00:09:35.466   00:40:24	-- common/autotest_common.sh@829 -- # '[' -z 962220 ']'
00:09:35.466   00:40:24	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:35.466   00:40:24	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:35.466   00:40:24	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:35.466  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:35.466   00:40:24	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:35.466   00:40:24	-- common/autotest_common.sh@10 -- # set +x
00:09:35.466  [2024-12-17 00:40:24.195668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:35.466  [2024-12-17 00:40:24.195741] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962220 ]
00:09:35.466  EAL: No free 2048 kB hugepages reported on node 1
00:09:35.466  [2024-12-17 00:40:24.299825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:35.466  [2024-12-17 00:40:24.352332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:35.466  [2024-12-17 00:40:24.352417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:35.466  [2024-12-17 00:40:24.352421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.466  [2024-12-17 00:40:24.585335] 'OCF_Core' volume operations registered
00:09:35.466  [2024-12-17 00:40:24.587768] 'OCF_Cache' volume operations registered
00:09:35.466  [2024-12-17 00:40:24.590678] 'OCF Composite' volume operations registered
00:09:35.466  [2024-12-17 00:40:24.593140] 'SPDK_block_device' volume operations registered
00:09:39.661   00:40:28	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:39.661   00:40:28	-- common/autotest_common.sh@862 -- # return 0
00:09:39.661   00:40:28	-- bdev/blockdev.sh@292 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:09:39.661  I/O targets:
00:09:39.661    Nvme0n1: 7814037168 blocks of 512 bytes (3815448 MiB)
00:09:39.661  
00:09:39.661  
00:09:39.661       CUnit - A unit testing framework for C - Version 2.1-3
00:09:39.661       http://cunit.sourceforge.net/
00:09:39.661  
00:09:39.661  
00:09:39.661  Suite: bdevio tests on: Nvme0n1
00:09:39.661    Test: blockdev write read block ...passed
00:09:39.661    Test: blockdev write zeroes read block ...passed
00:09:39.661    Test: blockdev write zeroes read no split ...passed
00:09:39.661    Test: blockdev write zeroes read split ...passed
00:09:39.661    Test: blockdev write zeroes read split partial ...passed
00:09:39.661    Test: blockdev reset ...[2024-12-17 00:40:28.312564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:09:39.661  [2024-12-17 00:40:28.315005] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:09:39.661  passed
00:09:39.661    Test: blockdev write read 8 blocks ...passed
00:09:39.661    Test: blockdev write read size > 128k ...passed
00:09:39.661    Test: blockdev write read invalid size ...passed
00:09:39.661    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:39.661    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:39.661    Test: blockdev write read max offset ...passed
00:09:39.661    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:39.661    Test: blockdev writev readv 8 blocks ...passed
00:09:39.661    Test: blockdev writev readv 30 x 1block ...passed
00:09:39.661    Test: blockdev writev readv block ...passed
00:09:39.661    Test: blockdev writev readv size > 128k ...passed
00:09:39.661    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:39.661    Test: blockdev comparev and writev ...passed
00:09:39.661    Test: blockdev nvme passthru rw ...passed
00:09:39.661    Test: blockdev nvme passthru vendor specific ...[2024-12-17 00:40:28.335669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:895 PRP1 0x0 PRP2 0x0
00:09:39.661  [2024-12-17 00:40:28.335697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:895 cdw0:0 sqhd:0056 p:1 m:0 dnr:1
00:09:39.661  passed
00:09:39.661    Test: blockdev nvme admin passthru ...passed
00:09:39.661    Test: blockdev copy ...passed
00:09:39.661  
00:09:39.661  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:39.661                suites      1      1    n/a      0        0
00:09:39.661                 tests     23     23     23      0        0
00:09:39.661               asserts    140    140    140      0      n/a
00:09:39.661  
00:09:39.661  Elapsed time =    0.131 seconds
00:09:39.661  0
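
The CUnit summary is produced by bdevio running as an RPC server (the -w flag makes it wait for a tester to connect) while tests.py drives it; reduced to its two moving parts, with flags as used in the trace:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  test/bdev/bdevio/tests.py perform_tests      # 23 tests, 140 asserts, 0 failures
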
00:09:39.661   00:40:28	-- bdev/blockdev.sh@293 -- # killprocess 962220
00:09:39.661   00:40:28	-- common/autotest_common.sh@936 -- # '[' -z 962220 ']'
00:09:39.661   00:40:28	-- common/autotest_common.sh@940 -- # kill -0 962220
00:09:39.661    00:40:28	-- common/autotest_common.sh@941 -- # uname
00:09:39.661   00:40:28	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:39.661    00:40:28	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 962220
00:09:39.661   00:40:28	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:39.661   00:40:28	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:39.661   00:40:28	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 962220'
00:09:39.661  killing process with pid 962220
00:09:39.661   00:40:28	-- common/autotest_common.sh@955 -- # kill 962220
00:09:39.661   00:40:28	-- common/autotest_common.sh@960 -- # wait 962220
00:09:43.855   00:40:32	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:09:43.855  
00:09:43.855  real	0m8.289s
00:09:43.855  user	0m24.416s
00:09:43.855  sys	0m0.684s
00:09:43.855   00:40:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:43.855   00:40:32	-- common/autotest_common.sh@10 -- # set +x
00:09:43.855  ************************************
00:09:43.855  END TEST bdev_bounds
00:09:43.855  ************************************
00:09:43.855   00:40:32	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 ''
00:09:43.855   00:40:32	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:09:43.855   00:40:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:43.855   00:40:32	-- common/autotest_common.sh@10 -- # set +x
00:09:43.855  ************************************
00:09:43.855  START TEST bdev_nbd
00:09:43.855  ************************************
00:09:43.855   00:40:32	-- common/autotest_common.sh@1114 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 ''
00:09:43.855    00:40:32	-- bdev/blockdev.sh@298 -- # uname -s
00:09:43.855   00:40:32	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:09:43.855   00:40:32	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:43.855   00:40:32	-- bdev/blockdev.sh@301 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:09:43.855   00:40:32	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1')
00:09:43.855   00:40:32	-- bdev/blockdev.sh@302 -- # local bdev_all
00:09:43.855   00:40:32	-- bdev/blockdev.sh@303 -- # local bdev_num=1
00:09:43.855   00:40:32	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:09:43.855   00:40:32	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:43.855   00:40:32	-- bdev/blockdev.sh@309 -- # local nbd_all
00:09:43.855   00:40:32	-- bdev/blockdev.sh@310 -- # bdev_num=1
00:09:43.855   00:40:32	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:09:43.855   00:40:32	-- bdev/blockdev.sh@312 -- # local nbd_list
00:09:43.856   00:40:32	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1')
00:09:43.856   00:40:32	-- bdev/blockdev.sh@313 -- # local bdev_list
00:09:43.856   00:40:32	-- bdev/blockdev.sh@316 -- # nbd_pid=963335
00:09:43.856   00:40:32	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:09:43.856   00:40:32	-- bdev/blockdev.sh@315 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:09:43.856   00:40:32	-- bdev/blockdev.sh@318 -- # waitforlisten 963335 /var/tmp/spdk-nbd.sock
00:09:43.856   00:40:32	-- common/autotest_common.sh@829 -- # '[' -z 963335 ']'
00:09:43.856   00:40:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:43.856   00:40:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:43.856   00:40:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:43.856  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:43.856   00:40:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:43.856   00:40:32	-- common/autotest_common.sh@10 -- # set +x
00:09:43.856  [2024-12-17 00:40:32.545505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:43.856  [2024-12-17 00:40:32.545576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:43.856  EAL: No free 2048 kB hugepages reported on node 1
00:09:43.856  [2024-12-17 00:40:32.642205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.856  [2024-12-17 00:40:32.688291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.856  [2024-12-17 00:40:32.901028] 'OCF_Core' volume operations registered
00:09:43.856  [2024-12-17 00:40:32.903489] 'OCF_Cache' volume operations registered
00:09:43.856  [2024-12-17 00:40:32.906432] 'OCF Composite' volume operations registered
00:09:43.856  [2024-12-17 00:40:32.908916] 'SPDK_block_device' volume operations registered
00:09:48.049   00:40:36	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:48.049   00:40:36	-- common/autotest_common.sh@862 -- # return 0
00:09:48.049   00:40:36	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@24 -- # local i
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:09:48.049    00:40:36	-- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:09:48.049    00:40:36	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:09:48.049   00:40:36	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:09:48.049   00:40:36	-- common/autotest_common.sh@867 -- # local i
00:09:48.049   00:40:36	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:48.049   00:40:36	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:48.049   00:40:36	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:09:48.049   00:40:36	-- common/autotest_common.sh@871 -- # break
00:09:48.049   00:40:36	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:48.049   00:40:36	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:48.049   00:40:36	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:48.049  1+0 records in
00:09:48.049  1+0 records out
00:09:48.049  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272567 s, 15.0 MB/s
00:09:48.049    00:40:36	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:09:48.049   00:40:36	-- common/autotest_common.sh@884 -- # size=4096
00:09:48.049   00:40:36	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:09:48.049   00:40:36	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:48.049   00:40:36	-- common/autotest_common.sh@887 -- # return 0
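
waitfornbd, expanded above, polls /proc/partitions until the device shows up and then proves it is readable with a direct-I/O dd; a compact sketch of the same loop (the real helper also retries the dd itself):

  waitfornbd() {
      local nbd=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd" /proc/partitions && break
          sleep 0.1
      done
      # one 4k O_DIRECT read proves the kernel device is actually serving I/O
      dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }
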
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:48.049   00:40:36	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:09:48.049    00:40:36	-- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:09:48.049    {
00:09:48.049      "nbd_device": "/dev/nbd0",
00:09:48.049      "bdev_name": "Nvme0n1"
00:09:48.049    }
00:09:48.049  ]'
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:09:48.049    00:40:37	-- bdev/nbd_common.sh@119 -- # echo '[
00:09:48.049    {
00:09:48.049      "nbd_device": "/dev/nbd0",
00:09:48.049      "bdev_name": "Nvme0n1"
00:09:48.049    }
00:09:48.049  ]'
00:09:48.049    00:40:37	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@51 -- # local i
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:48.049   00:40:37	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:48.309    00:40:37	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:48.309   00:40:37	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:48.309   00:40:37	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:48.309   00:40:37	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:48.309   00:40:37	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:48.309   00:40:37	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:48.309   00:40:37	-- bdev/nbd_common.sh@41 -- # break
00:09:48.309   00:40:37	-- bdev/nbd_common.sh@45 -- # return 0
00:09:48.309    00:40:37	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:48.309    00:40:37	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.309     00:40:37	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:48.568    00:40:37	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:48.568     00:40:37	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:48.568     00:40:37	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:48.568    00:40:37	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:48.568     00:40:37	-- bdev/nbd_common.sh@65 -- # echo ''
00:09:48.568     00:40:37	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:48.568     00:40:37	-- bdev/nbd_common.sh@65 -- # true
00:09:48.568    00:40:37	-- bdev/nbd_common.sh@65 -- # count=0
00:09:48.568    00:40:37	-- bdev/nbd_common.sh@66 -- # echo 0
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@122 -- # count=0
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@127 -- # return 0
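
The nbd_get_count pattern traced above, as a sketch: ask the standalone nbd RPC server which devices it still exports, reduce the JSON to device paths with jq, and count the /dev/nbd entries. The `|| true` mirrors the `-- # true` line in the trace: grep -c exits non-zero when the count is 0, which would otherwise trip the harness's errexit handling. The rpc.py path is shortened here.

    nbd_get_count() {
        local rpc_server=$1 json names count
        json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }
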
00:09:48.568   00:40:37	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@12 -- # local i
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:09:48.568   00:40:37	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:09:48.827  /dev/nbd0
00:09:48.827    00:40:37	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:48.827   00:40:37	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:48.827   00:40:37	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:09:48.827   00:40:37	-- common/autotest_common.sh@867 -- # local i
00:09:48.827   00:40:37	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:09:48.827   00:40:37	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:09:48.827   00:40:37	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:09:48.827   00:40:37	-- common/autotest_common.sh@871 -- # break
00:09:48.827   00:40:37	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:09:48.827   00:40:37	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:09:48.827   00:40:37	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:48.827  1+0 records in
00:09:48.827  1+0 records out
00:09:48.827  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307617 s, 13.3 MB/s
00:09:48.827    00:40:37	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:09:48.827   00:40:37	-- common/autotest_common.sh@884 -- # size=4096
00:09:48.827   00:40:37	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:09:48.827   00:40:37	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:09:48.827   00:40:37	-- common/autotest_common.sh@887 -- # return 0
00:09:48.827   00:40:37	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:48.827   00:40:37	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:09:48.827    00:40:37	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:48.827    00:40:37	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:48.827     00:40:37	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:49.087    00:40:38	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:49.087    {
00:09:49.087      "nbd_device": "/dev/nbd0",
00:09:49.087      "bdev_name": "Nvme0n1"
00:09:49.087    }
00:09:49.087  ]'
00:09:49.087     00:40:38	-- bdev/nbd_common.sh@64 -- # echo '[
00:09:49.087    {
00:09:49.087      "nbd_device": "/dev/nbd0",
00:09:49.087      "bdev_name": "Nvme0n1"
00:09:49.087    }
00:09:49.087  ]'
00:09:49.087     00:40:38	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:49.087    00:40:38	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:09:49.087     00:40:38	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:09:49.087     00:40:38	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:49.087    00:40:38	-- bdev/nbd_common.sh@65 -- # count=1
00:09:49.087    00:40:38	-- bdev/nbd_common.sh@66 -- # echo 1
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@95 -- # count=1
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@71 -- # local operation=write
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:09:49.087  256+0 records in
00:09:49.087  256+0 records out
00:09:49.087  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103921 s, 101 MB/s
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:49.087  256+0 records in
00:09:49.087  256+0 records out
00:09:49.087  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231725 s, 45.3 MB/s
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:49.087   00:40:38	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
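
Sketch of the nbd_dd_data_verify flow just traced: for "write", fill a 1 MiB scratch file from /dev/urandom and dd it onto each nbd device with O_DIRECT; for "verify", byte-compare the first 1 MiB of each device against that same file with cmp and then drop the file. The argument order is simplified here (operation first); the real helper takes the device list first, as the trace shows.

    nbd_dd_data_verify() {
        local operation=$1; shift
        local nbd_list=("$@") tmp_file=/tmp/nbdrandtest i
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"    # any difference exits non-zero and fails the test
            done
            rm "$tmp_file"
        fi
    }

Called once as `nbd_dd_data_verify write /dev/nbd0` and again as `... verify /dev/nbd0`, matching the two passes above.
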
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@51 -- # local i
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:49.346   00:40:38	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:49.605    00:40:38	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:49.605   00:40:38	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:49.605   00:40:38	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:49.605   00:40:38	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:49.605   00:40:38	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:49.605   00:40:38	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:49.605   00:40:38	-- bdev/nbd_common.sh@41 -- # break
00:09:49.605   00:40:38	-- bdev/nbd_common.sh@45 -- # return 0
00:09:49.605    00:40:38	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:49.605    00:40:38	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.605     00:40:38	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:49.864    00:40:38	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:49.864     00:40:38	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:49.864     00:40:38	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:49.864    00:40:38	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:49.864     00:40:38	-- bdev/nbd_common.sh@65 -- # echo ''
00:09:49.864     00:40:38	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:49.864     00:40:38	-- bdev/nbd_common.sh@65 -- # true
00:09:49.864    00:40:38	-- bdev/nbd_common.sh@65 -- # count=0
00:09:49.864    00:40:38	-- bdev/nbd_common.sh@66 -- # echo 0
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@104 -- # count=0
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@109 -- # return 0
00:09:49.864   00:40:38	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:09:49.864   00:40:38	-- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:09:50.123  malloc_lvol_verify
00:09:50.123   00:40:39	-- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:09:50.382  d037d8ab-8db9-489c-874b-b4d674a6ce44
00:09:50.382   00:40:39	-- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:09:50.640  44372458-f54e-4c55-a7e6-a0d23dc596fa
00:09:50.640   00:40:39	-- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:09:50.899  /dev/nbd0
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:09:50.899  mke2fs 1.47.0 (5-Feb-2023)
00:09:50.899  Discarding device blocks: done
00:09:50.899  Creating filesystem with 4096 1k blocks and 1024 inodes
00:09:50.899  
00:09:50.899  Allocating group tables: done
00:09:50.899  Writing inode tables: done
00:09:50.899  Creating journal (1024 blocks): done
00:09:50.899  Writing superblocks and filesystem accounting information: done
00:09:50.899  
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@51 -- # local i
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:50.899   00:40:39	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:51.252    00:40:40	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@41 -- # break
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@45 -- # return 0
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:09:51.252   00:40:40	-- bdev/nbd_common.sh@147 -- # return 0
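
The nbd_with_lvol_verify sequence above, condensed to the RPC calls it issues: stack a 16 MiB malloc bdev (512-byte blocks), an lvstore, and a 4 MiB lvol on top, export the lvol over nbd, and prove it behaves like a real disk by formatting it with ext4. Names and sizes match the trace; only the rpc.py path is shortened.

    rpc="rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MiB backing bdev
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs    # prints the lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                     # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                                     # must exit 0 for the test to pass
    $rpc nbd_stop_disk /dev/nbd0
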
00:09:51.252   00:40:40	-- bdev/blockdev.sh@324 -- # killprocess 963335
00:09:51.252   00:40:40	-- common/autotest_common.sh@936 -- # '[' -z 963335 ']'
00:09:51.252   00:40:40	-- common/autotest_common.sh@940 -- # kill -0 963335
00:09:51.252    00:40:40	-- common/autotest_common.sh@941 -- # uname
00:09:51.252   00:40:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:51.252    00:40:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 963335
00:09:51.252   00:40:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:51.252   00:40:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:51.252   00:40:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 963335'
00:09:51.252  killing process with pid 963335
00:09:51.252   00:40:40	-- common/autotest_common.sh@955 -- # kill 963335
00:09:51.252   00:40:40	-- common/autotest_common.sh@960 -- # wait 963335
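
Sketch of the killprocess guard rails the trace steps through: refuse an empty pid, confirm the process is alive with kill -0, on Linux read its comm name and special-case a process running under sudo (simplified to a bail-out here; the real helper handles that branch differently), then signal it and reap it with wait. wait only works because the harness started the target as a child of the same shell.

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1    # simplified sudo handling
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
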
00:09:55.478   00:40:44	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:09:55.478  
00:09:55.478  real	0m11.814s
00:09:55.478  user	0m14.283s
00:09:55.478  sys	0m1.896s
00:09:55.478   00:40:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:55.478   00:40:44	-- common/autotest_common.sh@10 -- # set +x
00:09:55.478  ************************************
00:09:55.478  END TEST bdev_nbd
00:09:55.478  ************************************
00:09:55.478   00:40:44	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:09:55.478   00:40:44	-- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']'
00:09:55.478   00:40:44	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:09:55.478  skipping fio tests on NVMe due to multi-ns failures.
00:09:55.478   00:40:44	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:55.478   00:40:44	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:55.478   00:40:44	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:09:55.478   00:40:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:55.478   00:40:44	-- common/autotest_common.sh@10 -- # set +x
00:09:55.478  ************************************
00:09:55.478  START TEST bdev_verify
00:09:55.478  ************************************
00:09:55.478   00:40:44	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:55.478  [2024-12-17 00:40:44.402042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:55.479  [2024-12-17 00:40:44.402105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965059 ]
00:09:55.479  EAL: No free 2048 kB hugepages reported on node 1
00:09:55.479  [2024-12-17 00:40:44.495539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:55.479  [2024-12-17 00:40:44.552378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:55.479  [2024-12-17 00:40:44.552383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:55.738  [2024-12-17 00:40:44.782190] 'OCF_Core' volume operations registered
00:09:55.738  [2024-12-17 00:40:44.784623] 'OCF_Cache' volume operations registered
00:09:55.738  [2024-12-17 00:40:44.787535] 'OCF Composite' volume operations registered
00:09:55.738  [2024-12-17 00:40:44.789997] 'SPDK_block_device' volume operations registered
00:09:59.027  Running I/O for 5 seconds...
00:10:04.301  
00:10:04.301                                                                                                  Latency(us)
00:10:04.301  
[2024-12-16T23:40:53.566Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:04.301  
[2024-12-16T23:40:53.566Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.301  	 Verification LBA range: start 0x0 length 0x1d1c0beb
00:10:04.301  	 Nvme0n1             :       5.01   17434.89      68.11       0.00     0.00    7303.10      98.39   10770.70
00:10:04.301  
[2024-12-16T23:40:53.566Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.301  	 Verification LBA range: start 0x1d1c0beb length 0x1d1c0beb
00:10:04.301  	 Nvme0n1             :       5.01   17512.46      68.41       0.00     0.00    7271.49     283.16   10656.72
00:10:04.301  
[2024-12-16T23:40:53.566Z]  ===================================================================================================================
00:10:04.301  
[2024-12-16T23:40:53.566Z]  Total                       :              34947.34     136.51       0.00     0.00    7287.26      98.39   10770.70
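
The MiB/s column in the table above is just IOPS times the 4096-byte I/O size. A quick sanity check of the core-0 row with bc (1 MiB = 1048576 bytes):

    echo '17434.89 * 4096 / 1048576' | bc -l    # ~68.11, matching the MiB/s column
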
00:10:07.589  
00:10:07.589  real	0m12.431s
00:10:07.589  user	0m23.353s
00:10:07.589  sys	0m0.407s
00:10:07.589   00:40:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:07.589   00:40:56	-- common/autotest_common.sh@10 -- # set +x
00:10:07.589  ************************************
00:10:07.589  END TEST bdev_verify
00:10:07.589  ************************************
00:10:07.589   00:40:56	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:07.589   00:40:56	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:10:07.589   00:40:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:07.589   00:40:56	-- common/autotest_common.sh@10 -- # set +x
00:10:07.589  ************************************
00:10:07.589  START TEST bdev_verify_big_io
00:10:07.589  ************************************
00:10:07.589   00:40:56	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:07.848  [2024-12-17 00:40:56.874617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:07.848  [2024-12-17 00:40:56.874685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid966696 ]
00:10:07.848  EAL: No free 2048 kB hugepages reported on node 1
00:10:07.848  [2024-12-17 00:40:56.981878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:07.848  [2024-12-17 00:40:57.032779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:07.848  [2024-12-17 00:40:57.032783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.107  [2024-12-17 00:40:57.260415] 'OCF_Core' volume operations registered
00:10:08.107  [2024-12-17 00:40:57.262816] 'OCF_Cache' volume operations registered
00:10:08.107  [2024-12-17 00:40:57.265681] 'OCF Composite' volume operations registered
00:10:08.107  [2024-12-17 00:40:57.268110] 'SPDK_block_device' volume operations registered
00:10:11.395  Running I/O for 5 seconds...
00:10:16.670  
00:10:16.670                                                                                                  Latency(us)
00:10:16.670  
[2024-12-16T23:41:05.935Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:16.670  
[2024-12-16T23:41:05.935Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:16.670  	 Verification LBA range: start 0x0 length 0x1d1c0be
00:10:16.670  	 Nvme0n1             :       5.04    1308.82      81.80       0.00     0.00   96105.43    2293.76  147712.45
00:10:16.670  
[2024-12-16T23:41:05.935Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:16.670  	 Verification LBA range: start 0x1d1c0be length 0x1d1c0be
00:10:16.670  	 Nvme0n1             :       5.03    1335.79      83.49       0.00     0.00   94166.72    1852.10  125829.12
00:10:16.670  
[2024-12-16T23:41:05.935Z]  ===================================================================================================================
00:10:16.670  
[2024-12-16T23:41:05.935Z]  Total                       :               2644.61     165.29       0.00     0.00   95126.83    1852.10  147712.45
00:10:19.958  
00:10:19.958  real	0m12.365s
00:10:19.958  user	0m23.251s
00:10:19.958  sys	0m0.378s
00:10:19.958   00:41:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:19.959   00:41:09	-- common/autotest_common.sh@10 -- # set +x
00:10:19.959  ************************************
00:10:19.959  END TEST bdev_verify_big_io
00:10:19.959  ************************************
00:10:20.218   00:41:09	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:20.218   00:41:09	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:20.218   00:41:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:20.218   00:41:09	-- common/autotest_common.sh@10 -- # set +x
00:10:20.218  ************************************
00:10:20.218  START TEST bdev_write_zeroes
00:10:20.218  ************************************
00:10:20.218   00:41:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:20.218  [2024-12-17 00:41:09.297802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:20.218  [2024-12-17 00:41:09.297870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968388 ]
00:10:20.218  EAL: No free 2048 kB hugepages reported on node 1
00:10:20.218  [2024-12-17 00:41:09.406080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:20.218  [2024-12-17 00:41:09.456275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.477  [2024-12-17 00:41:09.666528] 'OCF_Core' volume operations registered
00:10:20.477  [2024-12-17 00:41:09.668664] 'OCF_Cache' volume operations registered
00:10:20.477  [2024-12-17 00:41:09.671242] 'OCF Composite' volume operations registered
00:10:20.477  [2024-12-17 00:41:09.673386] 'SPDK_block_device' volume operations registered
00:10:23.768  Running I/O for 1 seconds...
00:10:24.336  
00:10:24.336                                                                                                  Latency(us)
00:10:24.336  
[2024-12-16T23:41:13.601Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:24.336  
[2024-12-16T23:41:13.601Z]  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:24.336  	 Nvme0n1             :       1.00   62000.82     242.19       0.00     0.00    2057.64     733.72    2835.14
00:10:24.336  
[2024-12-16T23:41:13.601Z]  ===================================================================================================================
00:10:24.336  
[2024-12-16T23:41:13.601Z]  Total                       :              62000.82     242.19       0.00     0.00    2057.64     733.72    2835.14
00:10:28.527  
00:10:28.527  real	0m8.269s
00:10:28.527  user	0m7.160s
00:10:28.527  sys	0m0.364s
00:10:28.527   00:41:17	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:28.527   00:41:17	-- common/autotest_common.sh@10 -- # set +x
00:10:28.527  ************************************
00:10:28.527  END TEST bdev_write_zeroes
00:10:28.527  ************************************
00:10:28.527   00:41:17	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:28.527   00:41:17	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:28.527   00:41:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:28.527   00:41:17	-- common/autotest_common.sh@10 -- # set +x
00:10:28.527  ************************************
00:10:28.527  START TEST bdev_json_nonenclosed
00:10:28.527  ************************************
00:10:28.527   00:41:17	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:28.527  [2024-12-17 00:41:17.590513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:28.527  [2024-12-17 00:41:17.590564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969576 ]
00:10:28.527  EAL: No free 2048 kB hugepages reported on node 1
00:10:28.527  [2024-12-17 00:41:17.678952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:28.527  [2024-12-17 00:41:17.730461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:28.527  [2024-12-17 00:41:17.730574] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:28.527  [2024-12-17 00:41:17.730597] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:28.787  
00:10:28.787  real	0m0.252s
00:10:28.787  user	0m0.145s
00:10:28.787  sys	0m0.106s
00:10:28.787   00:41:17	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:28.787   00:41:17	-- common/autotest_common.sh@10 -- # set +x
00:10:28.787  ************************************
00:10:28.787  END TEST bdev_json_nonenclosed
00:10:28.787  ************************************
00:10:28.787   00:41:17	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:28.787   00:41:17	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:28.787   00:41:17	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:28.787   00:41:17	-- common/autotest_common.sh@10 -- # set +x
00:10:28.787  ************************************
00:10:28.787  START TEST bdev_json_nonarray
00:10:28.787  ************************************
00:10:28.787   00:41:17	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:28.787  [2024-12-17 00:41:17.900868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:28.787  [2024-12-17 00:41:17.900943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969608 ]
00:10:28.787  EAL: No free 2048 kB hugepages reported on node 1
00:10:28.787  [2024-12-17 00:41:18.007616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:29.046  [2024-12-17 00:41:18.058029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:29.046  [2024-12-17 00:41:18.058138] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:10:29.047  [2024-12-17 00:41:18.058160] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
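
Both negative tests above exercise the same loader: spdk_subsystem_init_from_json_config requires the config file to be a single JSON object whose "subsystems" key is an array. nonenclosed.json drops the enclosing {} and nonarray.json makes "subsystems" a non-array, so spdk_app_start exits non-zero in both runs. For contrast, an illustrative valid shape (not the actual test file) looks like:

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
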
00:10:29.047  
00:10:29.047  real	0m0.289s
00:10:29.047  user	0m0.159s
00:10:29.047  sys	0m0.128s
00:10:29.047   00:41:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:29.047   00:41:18	-- common/autotest_common.sh@10 -- # set +x
00:10:29.047  ************************************
00:10:29.047  END TEST bdev_json_nonarray
00:10:29.047  ************************************
00:10:29.047   00:41:18	-- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]]
00:10:29.047   00:41:18	-- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]]
00:10:29.047   00:41:18	-- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]]
00:10:29.047   00:41:18	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:10:29.047   00:41:18	-- bdev/blockdev.sh@809 -- # cleanup
00:10:29.047   00:41:18	-- bdev/blockdev.sh@21 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile
00:10:29.047   00:41:18	-- bdev/blockdev.sh@22 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:10:29.047   00:41:18	-- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]]
00:10:29.047   00:41:18	-- bdev/blockdev.sh@28 -- # [[ nvme == daos ]]
00:10:29.047   00:41:18	-- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]]
00:10:29.047   00:41:18	-- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]]
00:10:29.047  
00:10:29.047  real	1m9.856s
00:10:29.047  user	1m46.726s
00:10:29.047  sys	0m5.438s
00:10:29.047   00:41:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:29.047   00:41:18	-- common/autotest_common.sh@10 -- # set +x
00:10:29.047  ************************************
00:10:29.047  END TEST blockdev_nvme
00:10:29.047  ************************************
00:10:29.047    00:41:18	-- spdk/autotest.sh@206 -- # uname -s
00:10:29.047   00:41:18	-- spdk/autotest.sh@206 -- # [[ Linux == Linux ]]
00:10:29.047   00:41:18	-- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt
00:10:29.047   00:41:18	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:29.047   00:41:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:29.047   00:41:18	-- common/autotest_common.sh@10 -- # set +x
00:10:29.047  ************************************
00:10:29.047  START TEST blockdev_nvme_gpt
00:10:29.047  ************************************
00:10:29.047   00:41:18	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt
00:10:29.306  * Looking for test storage...
00:10:29.306  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev
00:10:29.306    00:41:18	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:10:29.306     00:41:18	-- common/autotest_common.sh@1690 -- # lcov --version
00:10:29.306     00:41:18	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:10:29.306    00:41:18	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:10:29.306    00:41:18	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:10:29.306    00:41:18	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:10:29.306    00:41:18	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:10:29.306    00:41:18	-- scripts/common.sh@335 -- # IFS=.-:
00:10:29.306    00:41:18	-- scripts/common.sh@335 -- # read -ra ver1
00:10:29.306    00:41:18	-- scripts/common.sh@336 -- # IFS=.-:
00:10:29.306    00:41:18	-- scripts/common.sh@336 -- # read -ra ver2
00:10:29.306    00:41:18	-- scripts/common.sh@337 -- # local 'op=<'
00:10:29.306    00:41:18	-- scripts/common.sh@339 -- # ver1_l=2
00:10:29.306    00:41:18	-- scripts/common.sh@340 -- # ver2_l=1
00:10:29.306    00:41:18	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:10:29.306    00:41:18	-- scripts/common.sh@343 -- # case "$op" in
00:10:29.306    00:41:18	-- scripts/common.sh@344 -- # : 1
00:10:29.306    00:41:18	-- scripts/common.sh@363 -- # (( v = 0 ))
00:10:29.306    00:41:18	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:29.306     00:41:18	-- scripts/common.sh@364 -- # decimal 1
00:10:29.306     00:41:18	-- scripts/common.sh@352 -- # local d=1
00:10:29.306     00:41:18	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:29.306     00:41:18	-- scripts/common.sh@354 -- # echo 1
00:10:29.306    00:41:18	-- scripts/common.sh@364 -- # ver1[v]=1
00:10:29.306     00:41:18	-- scripts/common.sh@365 -- # decimal 2
00:10:29.306     00:41:18	-- scripts/common.sh@352 -- # local d=2
00:10:29.306     00:41:18	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:29.306     00:41:18	-- scripts/common.sh@354 -- # echo 2
00:10:29.306    00:41:18	-- scripts/common.sh@365 -- # ver2[v]=2
00:10:29.306    00:41:18	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:10:29.306    00:41:18	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:10:29.306    00:41:18	-- scripts/common.sh@367 -- # return 0
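
The scripts/common.sh trace above is a field-by-field version comparison: split each version string on '.', '-' or ':' into an array and compare numerically until one side wins. A compact sketch of the less-than path (here `lt 1.15 2` resolves at the first field, 1 < 2, and returns 0, so the newer lcov options are selected); the real script's `decimal` helper for non-numeric fields is assumed away:

    cmp_versions_lt() {    # returns 0 when $1 < $2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
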
00:10:29.306    00:41:18	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:29.306    00:41:18	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:10:29.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:29.306  		--rc genhtml_branch_coverage=1
00:10:29.306  		--rc genhtml_function_coverage=1
00:10:29.306  		--rc genhtml_legend=1
00:10:29.306  		--rc geninfo_all_blocks=1
00:10:29.306  		--rc geninfo_unexecuted_blocks=1
00:10:29.306  		
00:10:29.306  		'
00:10:29.306    00:41:18	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:10:29.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:29.306  		--rc genhtml_branch_coverage=1
00:10:29.306  		--rc genhtml_function_coverage=1
00:10:29.306  		--rc genhtml_legend=1
00:10:29.306  		--rc geninfo_all_blocks=1
00:10:29.306  		--rc geninfo_unexecuted_blocks=1
00:10:29.306  		
00:10:29.306  		'
00:10:29.306    00:41:18	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:10:29.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:29.306  		--rc genhtml_branch_coverage=1
00:10:29.306  		--rc genhtml_function_coverage=1
00:10:29.306  		--rc genhtml_legend=1
00:10:29.306  		--rc geninfo_all_blocks=1
00:10:29.306  		--rc geninfo_unexecuted_blocks=1
00:10:29.306  		
00:10:29.306  		'
00:10:29.306    00:41:18	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:10:29.306  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:29.306  		--rc genhtml_branch_coverage=1
00:10:29.306  		--rc genhtml_function_coverage=1
00:10:29.306  		--rc genhtml_legend=1
00:10:29.306  		--rc geninfo_all_blocks=1
00:10:29.306  		--rc geninfo_unexecuted_blocks=1
00:10:29.306  		
00:10:29.306  		'
00:10:29.306   00:41:18	-- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh
00:10:29.306    00:41:18	-- bdev/nbd_common.sh@6 -- # set -e
00:10:29.306   00:41:18	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:10:29.306   00:41:18	-- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:10:29.307   00:41:18	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json
00:10:29.307   00:41:18	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json
00:10:29.307   00:41:18	-- bdev/blockdev.sh@18 -- # :
00:10:29.307   00:41:18	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:10:29.307   00:41:18	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:10:29.307   00:41:18	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:10:29.307    00:41:18	-- bdev/blockdev.sh@672 -- # uname -s
00:10:29.307   00:41:18	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:10:29.307   00:41:18	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:10:29.307   00:41:18	-- bdev/blockdev.sh@680 -- # test_type=gpt
00:10:29.307   00:41:18	-- bdev/blockdev.sh@681 -- # crypto_device=
00:10:29.307   00:41:18	-- bdev/blockdev.sh@682 -- # dek=
00:10:29.307   00:41:18	-- bdev/blockdev.sh@683 -- # env_ctx=
00:10:29.307   00:41:18	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:10:29.307   00:41:18	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:10:29.307   00:41:18	-- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]]
00:10:29.307   00:41:18	-- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]]
00:10:29.307   00:41:18	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:10:29.307   00:41:18	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=969685
00:10:29.307   00:41:18	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:29.307   00:41:18	-- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' ''
00:10:29.307   00:41:18	-- bdev/blockdev.sh@47 -- # waitforlisten 969685
00:10:29.307   00:41:18	-- common/autotest_common.sh@829 -- # '[' -z 969685 ']'
00:10:29.307   00:41:18	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:29.307   00:41:18	-- common/autotest_common.sh@834 -- # local max_retries=100
00:10:29.307   00:41:18	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:29.307  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:29.307   00:41:18	-- common/autotest_common.sh@838 -- # xtrace_disable
00:10:29.307   00:41:18	-- common/autotest_common.sh@10 -- # set +x
00:10:29.307  [2024-12-17 00:41:18.524839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:29.307  [2024-12-17 00:41:18.524923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969685 ]
00:10:29.566  EAL: No free 2048 kB hugepages reported on node 1
00:10:29.566  [2024-12-17 00:41:18.632682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:29.566  [2024-12-17 00:41:18.679833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:10:29.566  [2024-12-17 00:41:18.679997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:29.825  [2024-12-17 00:41:18.834601] 'OCF_Core' volume operations registered
00:10:29.825  [2024-12-17 00:41:18.836793] 'OCF_Cache' volume operations registered
00:10:29.825  [2024-12-17 00:41:18.839436] 'OCF Composite' volume operations registered
00:10:29.825  [2024-12-17 00:41:18.841653] 'SPDK_block_device' volume operations registered
00:10:30.393   00:41:19	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:30.393   00:41:19	-- common/autotest_common.sh@862 -- # return 0
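
Between starting spdk_tgt and issuing RPCs, the harness blocks in waitforlisten (locals visible above: rpc_addr=/var/tmp/spdk.sock, max_retries=100). The actual probe it uses is not visible in this log; a minimal stand-in that waits for the UNIX socket to appear while making sure the target did not die during startup might look like:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1        # target exited before listening
            [ -S "$rpc_addr" ] && return 0    # socket present; assumed sufficient here
            sleep 0.1
        done
        return 1
    }
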
00:10:30.393   00:41:19	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:10:30.393   00:41:19	-- bdev/blockdev.sh@700 -- # setup_gpt_conf
00:10:30.393   00:41:19	-- bdev/blockdev.sh@102 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:10:33.682  Waiting for block devices as requested
00:10:33.682  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:10:33.682  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:10:33.682  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:10:33.941  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:10:33.941  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:10:33.941  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:10:33.941  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:10:34.201  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:10:34.201  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:10:34.201  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:10:34.459  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:10:34.459  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:10:34.459  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:10:34.718  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:10:34.718  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:10:34.718  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:10:34.977  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:10:34.977   00:41:24	-- bdev/blockdev.sh@103 -- # get_zoned_devs
00:10:34.977   00:41:24	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:10:34.977   00:41:24	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:10:34.977   00:41:24	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:10:34.977   00:41:24	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:10:34.977   00:41:24	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:10:34.977   00:41:24	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:10:34.977   00:41:24	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:10:34.977   00:41:24	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:10:34.977   00:41:24	-- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:5e:00.0/nvme/nvme0/nvme0n1')
00:10:34.977   00:41:24	-- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev
00:10:34.977   00:41:24	-- bdev/blockdev.sh@106 -- # gpt_nvme=
00:10:34.977   00:41:24	-- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}"
00:10:34.977   00:41:24	-- bdev/blockdev.sh@109 -- # [[ -z '' ]]
00:10:34.977   00:41:24	-- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1
00:10:34.977    00:41:24	-- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print
00:10:34.977   00:41:24	-- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:10:34.977  BYT;
00:10:34.977  /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:;'
00:10:34.977   00:41:24	-- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:10:34.977  BYT;
00:10:34.977  /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:10:34.977   00:41:24	-- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1
00:10:34.977   00:41:24	-- bdev/blockdev.sh@114 -- # break
00:10:34.977   00:41:24	-- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]]
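
The setup_gpt_conf probe above in one piece: capture parted's machine-readable print (-ms) for each candidate namespace and glob-match the "unrecognised disk label" error to pick a disk that is safe to repartition. A sketch, with the candidate list shortened to the one device this run found:

    for dev in /dev/nvme0n1; do
        pt=$(parted "$dev" -ms print 2>&1 || true)
        if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
            gpt_nvme=$dev    # no label yet, safe to create GPT here
            break
        fi
    done
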
00:10:34.977   00:41:24	-- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:10:34.977   00:41:24	-- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:10:34.978   00:41:24	-- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:10:34.978    00:41:24	-- bdev/blockdev.sh@128 -- # get_spdk_gpt_old
00:10:34.978    00:41:24	-- scripts/common.sh@410 -- # local spdk_guid
00:10:34.978    00:41:24	-- scripts/common.sh@412 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]]
00:10:34.978    00:41:24	-- scripts/common.sh@414 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:10:34.978    00:41:24	-- scripts/common.sh@415 -- # IFS='()'
00:10:34.978    00:41:24	-- scripts/common.sh@415 -- # read -r _ spdk_guid _
00:10:34.978     00:41:24	-- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:10:34.978    00:41:24	-- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:10:34.978    00:41:24	-- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:10:34.978    00:41:24	-- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:10:34.978   00:41:24	-- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:10:34.978    00:41:24	-- bdev/blockdev.sh@129 -- # get_spdk_gpt
00:10:34.978    00:41:24	-- scripts/common.sh@422 -- # local spdk_guid
00:10:34.978    00:41:24	-- scripts/common.sh@424 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]]
00:10:34.978    00:41:24	-- scripts/common.sh@426 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:10:34.978    00:41:24	-- scripts/common.sh@427 -- # IFS='()'
00:10:34.978    00:41:24	-- scripts/common.sh@427 -- # read -r _ spdk_guid _
00:10:34.978     00:41:24	-- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:10:34.978    00:41:24	-- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:10:34.978    00:41:24	-- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:10:34.978    00:41:24	-- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:10:34.978   00:41:24	-- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
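
get_spdk_gpt (and its _old twin above) pull the partition-type GUIDs straight out of the C header rather than hard-coding them: grep the SPDK_GPT_PART_TYPE_GUID line from gpt.h, let IFS='()' split off the macro argument, then strip the 0x prefixes, which is what turns the first spdk_guid value in the trace into the second. A sketch, with $SPDK_DIR standing in for the workspace path:

    get_spdk_gpt() {
        local spdk_guid gpt_h=$SPDK_DIR/module/bdev/gpt/gpt.h
        [[ -e $gpt_h ]] || return 1
        IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
        spdk_guid=${spdk_guid//0x/}    # 0x6527994e-0x2c5a-... -> 6527994e-2c5a-...
        echo "$spdk_guid"
    }

The two GUIDs are then handed to sgdisk as partition type codes (-t) alongside fixed unique partition GUIDs (-u), as the next two commands show.
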
00:10:34.978   00:41:24	-- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:10:35.915  The operation has completed successfully.
00:10:35.915   00:41:25	-- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:10:37.298  The operation has completed successfully.
00:10:37.298   00:41:26	-- bdev/blockdev.sh@132 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:10:40.587  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:10:40.587  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:10:40.587  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:10:40.587  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:10:40.587  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:10:40.587  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:10:40.587  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:10:40.587  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:10:40.588  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:10:43.874  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:10:43.874   00:41:32	-- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs
00:10:43.874   00:41:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.874   00:41:32	-- common/autotest_common.sh@10 -- # set +x
00:10:43.874  []
00:10:43.874   00:41:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:43.874   00:41:32	-- bdev/blockdev.sh@134 -- # setup_nvme_conf
00:10:43.874   00:41:32	-- bdev/blockdev.sh@79 -- # local json
00:10:43.874   00:41:32	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:10:43.874    00:41:32	-- bdev/blockdev.sh@80 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:10:43.874   00:41:32	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:5e:00.0" } } ] }'\'''
00:10:43.874   00:41:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:43.874   00:41:32	-- common/autotest_common.sh@10 -- # set +x
00:10:46.411   00:41:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.411   00:41:35	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:10:46.411   00:41:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.411   00:41:35	-- common/autotest_common.sh@10 -- # set +x
00:10:46.411   00:41:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.411   00:41:35	-- bdev/blockdev.sh@738 -- # cat
00:10:46.411    00:41:35	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:10:46.411    00:41:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.411    00:41:35	-- common/autotest_common.sh@10 -- # set +x
00:10:46.411    00:41:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.411    00:41:35	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:10:46.411    00:41:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.411    00:41:35	-- common/autotest_common.sh@10 -- # set +x
00:10:46.411    00:41:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.411    00:41:35	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:10:46.411    00:41:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.411    00:41:35	-- common/autotest_common.sh@10 -- # set +x
00:10:46.411    00:41:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.411   00:41:35	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:10:46.411    00:41:35	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:10:46.411    00:41:35	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:10:46.411    00:41:35	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.411    00:41:35	-- common/autotest_common.sh@10 -- # set +x
00:10:46.670    00:41:35	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.670   00:41:35	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:10:46.671    00:41:35	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 512,' '  "num_blocks": 3907016704,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 2048,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme0n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 512,' '  "num_blocks": 3907016703,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 3907018752,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
00:10:46.671    00:41:35	-- bdev/blockdev.sh@747 -- # jq -r .name
00:10:46.671   00:41:35	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:10:46.671   00:41:35	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1
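
How the test picked Nvme0n1p1 above, as a sketch: dump the unclaimed bdevs as a stream of JSON objects, reduce the stream to names with a second jq pass (jq re-parses the concatenated pretty-printed objects that mapfile split into lines), and take the first name as the hello_world target.

    mapfile -t bdevs < <(rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
    mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
    bdev_list=("${bdevs_name[@]}")
    hello_world_bdev=${bdev_list[0]}    # Nvme0n1p1 in this run
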
00:10:46.671   00:41:35	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:10:46.671   00:41:35	-- bdev/blockdev.sh@752 -- # killprocess 969685
00:10:46.671   00:41:35	-- common/autotest_common.sh@936 -- # '[' -z 969685 ']'
00:10:46.671   00:41:35	-- common/autotest_common.sh@940 -- # kill -0 969685
00:10:46.671    00:41:35	-- common/autotest_common.sh@941 -- # uname
00:10:46.671   00:41:35	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:46.671    00:41:35	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 969685
00:10:46.671   00:41:35	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:46.671   00:41:35	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:46.671   00:41:35	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 969685'
00:10:46.671  killing process with pid 969685
00:10:46.671   00:41:35	-- common/autotest_common.sh@955 -- # kill 969685
00:10:46.671   00:41:35	-- common/autotest_common.sh@960 -- # wait 969685
00:10:50.864   00:41:39	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:50.864   00:41:39	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:10:50.864   00:41:39	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:10:50.864   00:41:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:50.864   00:41:39	-- common/autotest_common.sh@10 -- # set +x
00:10:50.864  ************************************
00:10:50.864  START TEST bdev_hello_world
00:10:50.864  ************************************
00:10:50.864   00:41:39	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:10:50.864  [2024-12-17 00:41:39.969546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:50.864  [2024-12-17 00:41:39.969614] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973827 ]
00:10:50.864  EAL: No free 2048 kB hugepages reported on node 1
00:10:50.864  [2024-12-17 00:41:40.075149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:50.864  [2024-12-17 00:41:40.125264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:51.123  [2024-12-17 00:41:40.353045] 'OCF_Core' volume operations registered
00:10:51.123  [2024-12-17 00:41:40.355491] 'OCF_Cache' volume operations registered
00:10:51.123  [2024-12-17 00:41:40.358427] 'OCF Composite' volume operations registered
00:10:51.123  [2024-12-17 00:41:40.360866] 'SPDK_block_device' volume operations registered
00:10:54.547  [2024-12-17 00:41:43.225877] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:10:54.547  [2024-12-17 00:41:43.225918] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:10:54.547  [2024-12-17 00:41:43.225936] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:10:54.547  [2024-12-17 00:41:43.228219] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:10:54.547  [2024-12-17 00:41:43.228396] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:10:54.547  [2024-12-17 00:41:43.228416] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:10:54.547  [2024-12-17 00:41:43.231283] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:10:54.547  
00:10:54.547  [2024-12-17 00:41:43.231304] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:10:58.767  
00:10:58.767  real	0m7.285s
00:10:58.767  user	0m6.149s
00:10:58.767  sys	0m0.387s
00:10:58.767   00:41:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:58.767   00:41:47	-- common/autotest_common.sh@10 -- # set +x
00:10:58.767  ************************************
00:10:58.767  END TEST bdev_hello_world
00:10:58.768  ************************************
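
The hello-world pass above is a single invocation of the packaged example against the JSON bdev config; run by hand, the equivalent is (paths are the workspace layout visible in this log):

    # Sketch: open, write, and read back one GPT bdev with the stock example app.
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b Nvme0n1p1    # the app writes "Hello World!" and reads it back
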
00:10:58.768   00:41:47	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:10:58.768   00:41:47	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:58.768   00:41:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:58.768   00:41:47	-- common/autotest_common.sh@10 -- # set +x
00:10:58.768  ************************************
00:10:58.768  START TEST bdev_bounds
00:10:58.768  ************************************
00:10:58.768   00:41:47	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:10:58.768   00:41:47	-- bdev/blockdev.sh@288 -- # bdevio_pid=974754
00:10:58.768   00:41:47	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:10:58.768   00:41:47	-- bdev/blockdev.sh@287 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:10:58.768   00:41:47	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 974754'
00:10:58.768  Process bdevio pid: 974754
00:10:58.768   00:41:47	-- bdev/blockdev.sh@291 -- # waitforlisten 974754
00:10:58.768   00:41:47	-- common/autotest_common.sh@829 -- # '[' -z 974754 ']'
00:10:58.768   00:41:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:58.768   00:41:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:10:58.768   00:41:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:58.768  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:58.768   00:41:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:10:58.768   00:41:47	-- common/autotest_common.sh@10 -- # set +x
00:10:58.768  [2024-12-17 00:41:47.311499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:58.768  [2024-12-17 00:41:47.311581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974754 ]
00:10:58.768  EAL: No free 2048 kB hugepages reported on node 1
00:10:58.768  [2024-12-17 00:41:47.423051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:58.768  [2024-12-17 00:41:47.476809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:58.768  [2024-12-17 00:41:47.476915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:58.768  [2024-12-17 00:41:47.476902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:10:58.768  [2024-12-17 00:41:47.680865] 'OCF_Core' volume operations registered
00:10:58.768  [2024-12-17 00:41:47.683047] 'OCF_Cache' volume operations registered
00:10:58.768  [2024-12-17 00:41:47.685601] 'OCF Composite' volume operations registered
00:10:58.768  [2024-12-17 00:41:47.687794] 'SPDK_block_device' volume operations registered
00:11:02.058   00:41:51	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:02.058   00:41:51	-- common/autotest_common.sh@862 -- # return 0
00:11:02.058   00:41:51	-- bdev/blockdev.sh@292 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:11:02.058  I/O targets:
00:11:02.058    Nvme0n1p1: 3907016704 blocks of 512 bytes (1907723 MiB)
00:11:02.058    Nvme0n1p2: 3907016703 blocks of 512 bytes (1907723 MiB)
00:11:02.058  
00:11:02.058  
00:11:02.058       CUnit - A unit testing framework for C - Version 2.1-3
00:11:02.058       http://cunit.sourceforge.net/
00:11:02.058  
00:11:02.058  
00:11:02.058  Suite: bdevio tests on: Nvme0n1p2
00:11:02.058    Test: blockdev write read block ...passed
00:11:02.058    Test: blockdev write zeroes read block ...passed
00:11:02.058    Test: blockdev write zeroes read no split ...passed
00:11:02.058    Test: blockdev write zeroes read split ...passed
00:11:02.058    Test: blockdev write zeroes read split partial ...passed
00:11:02.058    Test: blockdev reset ...[2024-12-17 00:41:51.298109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:11:02.058  [2024-12-17 00:41:51.300508] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:11:02.058  passed
00:11:02.058    Test: blockdev write read 8 blocks ...passed
00:11:02.058    Test: blockdev write read size > 128k ...passed
00:11:02.058    Test: blockdev write read invalid size ...passed
00:11:02.058    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:02.058    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:02.058    Test: blockdev write read max offset ...passed
00:11:02.058    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:02.058    Test: blockdev writev readv 8 blocks ...passed
00:11:02.317    Test: blockdev writev readv 30 x 1block ...passed
00:11:02.317    Test: blockdev writev readv block ...passed
00:11:02.317    Test: blockdev writev readv size > 128k ...passed
00:11:02.317    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:02.317    Test: blockdev comparev and writev ...passed
00:11:02.317    Test: blockdev nvme passthru rw ...passed
00:11:02.317    Test: blockdev nvme passthru vendor specific ...passed
00:11:02.317    Test: blockdev nvme admin passthru ...passed
00:11:02.317    Test: blockdev copy ...passed
00:11:02.317  Suite: bdevio tests on: Nvme0n1p1
00:11:02.317    Test: blockdev write read block ...passed
00:11:02.317    Test: blockdev write zeroes read block ...passed
00:11:02.317    Test: blockdev write zeroes read no split ...passed
00:11:02.317    Test: blockdev write zeroes read split ...passed
00:11:02.317    Test: blockdev write zeroes read split partial ...passed
00:11:02.317    Test: blockdev reset ...[2024-12-17 00:41:51.367892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:11:02.317  [2024-12-17 00:41:51.370150] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:11:02.317  passed
00:11:02.317    Test: blockdev write read 8 blocks ...passed
00:11:02.317    Test: blockdev write read size > 128k ...passed
00:11:02.317    Test: blockdev write read invalid size ...passed
00:11:02.317    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:02.317    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:02.317    Test: blockdev write read max offset ...passed
00:11:02.317    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:02.317    Test: blockdev writev readv 8 blocks ...passed
00:11:02.317    Test: blockdev writev readv 30 x 1block ...passed
00:11:02.317    Test: blockdev writev readv block ...passed
00:11:02.317    Test: blockdev writev readv size > 128k ...passed
00:11:02.317    Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:02.317    Test: blockdev comparev and writev ...passed
00:11:02.317    Test: blockdev nvme passthru rw ...passed
00:11:02.317    Test: blockdev nvme passthru vendor specific ...passed
00:11:02.317    Test: blockdev nvme admin passthru ...passed
00:11:02.317    Test: blockdev copy ...passed
00:11:02.317  
00:11:02.318  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:02.318                suites      2      2    n/a      0        0
00:11:02.318                 tests     46     46     46      0        0
00:11:02.318               asserts    260    260    260      0      n/a
00:11:02.318  
00:11:02.318  Elapsed time =    0.281 seconds
00:11:02.318  0
00:11:02.318   00:41:51	-- bdev/blockdev.sh@293 -- # killprocess 974754
00:11:02.318   00:41:51	-- common/autotest_common.sh@936 -- # '[' -z 974754 ']'
00:11:02.318   00:41:51	-- common/autotest_common.sh@940 -- # kill -0 974754
00:11:02.318    00:41:51	-- common/autotest_common.sh@941 -- # uname
00:11:02.318   00:41:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:02.318    00:41:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 974754
00:11:02.318   00:41:51	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:02.318   00:41:51	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:02.318   00:41:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 974754'
00:11:02.318  killing process with pid 974754
00:11:02.318   00:41:51	-- common/autotest_common.sh@955 -- # kill 974754
00:11:02.318   00:41:51	-- common/autotest_common.sh@960 -- # wait 974754
00:11:06.511   00:41:55	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:11:06.511  
00:11:06.511  real	0m8.233s
00:11:06.511  user	0m24.032s
00:11:06.511  sys	0m0.690s
00:11:06.511   00:41:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:06.511   00:41:55	-- common/autotest_common.sh@10 -- # set +x
00:11:06.511  ************************************
00:11:06.511  END TEST bdev_bounds
00:11:06.511  ************************************
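
bdev_bounds drives the CUnit suites above through bdevio's RPC interface: bdevio is started with -w so it waits, and tests.py triggers the boundary cases. A hand-run sketch of that flow, using the commands from this trace:

    # Sketch: start bdevio as a waiting RPC server, then fire the test suites at it.
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests   # runs the read/write/offset cases
    kill "$bdevio_pid"; wait "$bdevio_pid"
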
00:11:06.511   00:41:55	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:11:06.511   00:41:55	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:11:06.511   00:41:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:06.511   00:41:55	-- common/autotest_common.sh@10 -- # set +x
00:11:06.511  ************************************
00:11:06.511  START TEST bdev_nbd
00:11:06.511  ************************************
00:11:06.511   00:41:55	-- common/autotest_common.sh@1114 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:11:06.511    00:41:55	-- bdev/blockdev.sh@298 -- # uname -s
00:11:06.511   00:41:55	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:11:06.511   00:41:55	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:06.511   00:41:55	-- bdev/blockdev.sh@301 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:11:06.511   00:41:55	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2')
00:11:06.511   00:41:55	-- bdev/blockdev.sh@302 -- # local bdev_all
00:11:06.511   00:41:55	-- bdev/blockdev.sh@303 -- # local bdev_num=2
00:11:06.511   00:41:55	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:11:06.511   00:41:55	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:06.511   00:41:55	-- bdev/blockdev.sh@309 -- # local nbd_all
00:11:06.511   00:41:55	-- bdev/blockdev.sh@310 -- # bdev_num=2
00:11:06.511   00:41:55	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:06.511   00:41:55	-- bdev/blockdev.sh@312 -- # local nbd_list
00:11:06.511   00:41:55	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:11:06.511   00:41:55	-- bdev/blockdev.sh@313 -- # local bdev_list
00:11:06.511   00:41:55	-- bdev/blockdev.sh@316 -- # nbd_pid=975897
00:11:06.511   00:41:55	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:11:06.511   00:41:55	-- bdev/blockdev.sh@315 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:11:06.511   00:41:55	-- bdev/blockdev.sh@318 -- # waitforlisten 975897 /var/tmp/spdk-nbd.sock
00:11:06.511   00:41:55	-- common/autotest_common.sh@829 -- # '[' -z 975897 ']'
00:11:06.511   00:41:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:06.511   00:41:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:06.511   00:41:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:11:06.511  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:11:06.511   00:41:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:06.511   00:41:55	-- common/autotest_common.sh@10 -- # set +x
00:11:06.511  [2024-12-17 00:41:55.598568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:06.511  [2024-12-17 00:41:55.598634] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:06.511  EAL: No free 2048 kB hugepages reported on node 1
00:11:06.511  [2024-12-17 00:41:55.706476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:06.511  [2024-12-17 00:41:55.753277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.771  [2024-12-17 00:41:55.971612] 'OCF_Core' volume operations registered
00:11:06.771  [2024-12-17 00:41:55.974138] 'OCF_Cache' volume operations registered
00:11:06.771  [2024-12-17 00:41:55.977053] 'OCF Composite' volume operations registered
00:11:06.771  [2024-12-17 00:41:55.979501] 'SPDK_block_device' volume operations registered
00:11:10.964   00:41:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:10.964   00:41:59	-- common/autotest_common.sh@862 -- # return 0
00:11:10.964   00:41:59	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@24 -- # local i
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:11:10.964    00:41:59	-- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:11:10.964    00:41:59	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:11:10.964   00:41:59	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:11:10.964   00:41:59	-- common/autotest_common.sh@867 -- # local i
00:11:10.964   00:41:59	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:10.964   00:41:59	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:10.964   00:41:59	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:11:10.964   00:41:59	-- common/autotest_common.sh@871 -- # break
00:11:10.964   00:41:59	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:10.964   00:41:59	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:10.964   00:41:59	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:10.964  1+0 records in
00:11:10.964  1+0 records out
00:11:10.964  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263053 s, 15.6 MB/s
00:11:10.964    00:41:59	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:10.964   00:41:59	-- common/autotest_common.sh@884 -- # size=4096
00:11:10.964   00:41:59	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:10.964   00:41:59	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:10.964   00:41:59	-- common/autotest_common.sh@887 -- # return 0
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:10.964   00:41:59	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:11:10.964    00:41:59	-- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:11:10.964   00:42:00	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:11:10.964    00:42:00	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:11:10.964   00:42:00	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:11:10.964   00:42:00	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:11:10.964   00:42:00	-- common/autotest_common.sh@867 -- # local i
00:11:10.964   00:42:00	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:10.964   00:42:00	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:10.964   00:42:00	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:11:10.964   00:42:00	-- common/autotest_common.sh@871 -- # break
00:11:10.964   00:42:00	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:10.964   00:42:00	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:10.964   00:42:00	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:10.964  1+0 records in
00:11:10.964  1+0 records out
00:11:10.964  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276386 s, 14.8 MB/s
00:11:10.964    00:42:00	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:10.964   00:42:00	-- common/autotest_common.sh@884 -- # size=4096
00:11:10.964   00:42:00	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:10.964   00:42:00	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:10.964   00:42:00	-- common/autotest_common.sh@887 -- # return 0
00:11:10.964   00:42:00	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:10.964   00:42:00	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:11:10.964    00:42:00	-- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:11:11.223    {
00:11:11.223      "nbd_device": "/dev/nbd0",
00:11:11.223      "bdev_name": "Nvme0n1p1"
00:11:11.223    },
00:11:11.223    {
00:11:11.223      "nbd_device": "/dev/nbd1",
00:11:11.223      "bdev_name": "Nvme0n1p2"
00:11:11.223    }
00:11:11.223  ]'
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:11:11.223    00:42:00	-- bdev/nbd_common.sh@119 -- # echo '[
00:11:11.223    {
00:11:11.223      "nbd_device": "/dev/nbd0",
00:11:11.223      "bdev_name": "Nvme0n1p1"
00:11:11.223    },
00:11:11.223    {
00:11:11.223      "nbd_device": "/dev/nbd1",
00:11:11.223      "bdev_name": "Nvme0n1p2"
00:11:11.223    }
00:11:11.223  ]'
00:11:11.223    00:42:00	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
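
The start/stop verification above parses nbd_get_disks output with jq to recover the exported device names. The same mapping as a one-liner sketch (socket path as in the run above):

    # Sketch: list which bdev backs each exported NBD device.
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'
    # expected here: /dev/nbd0 -> Nvme0n1p1, /dev/nbd1 -> Nvme0n1p2
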
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@51 -- # local i
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:11.223   00:42:00	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:11.482    00:42:00	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@41 -- # break
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@45 -- # return 0
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:11.482   00:42:00	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:11.741    00:42:00	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:11.741   00:42:00	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:11.741   00:42:00	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:11.741   00:42:00	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:11.741   00:42:00	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:11.741   00:42:00	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:11.741   00:42:00	-- bdev/nbd_common.sh@41 -- # break
00:11:11.741   00:42:00	-- bdev/nbd_common.sh@45 -- # return 0
00:11:11.741    00:42:00	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:11.741    00:42:00	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:11.741     00:42:00	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:12.000    00:42:01	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:12.000     00:42:01	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:12.000     00:42:01	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:12.000    00:42:01	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:12.000     00:42:01	-- bdev/nbd_common.sh@65 -- # echo ''
00:11:12.000     00:42:01	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:12.000     00:42:01	-- bdev/nbd_common.sh@65 -- # true
00:11:12.000    00:42:01	-- bdev/nbd_common.sh@65 -- # count=0
00:11:12.000    00:42:01	-- bdev/nbd_common.sh@66 -- # echo 0
00:11:12.000   00:42:01	-- bdev/nbd_common.sh@122 -- # count=0
00:11:12.000   00:42:01	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:11:12.000   00:42:01	-- bdev/nbd_common.sh@127 -- # return 0
00:11:12.000   00:42:01	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:11:12.000   00:42:01	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@12 -- # local i
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:12.001   00:42:01	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
00:11:12.259  /dev/nbd0
00:11:12.259    00:42:01	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:12.259   00:42:01	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:12.259   00:42:01	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:11:12.259   00:42:01	-- common/autotest_common.sh@867 -- # local i
00:11:12.259   00:42:01	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:12.259   00:42:01	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:12.259   00:42:01	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:11:12.259   00:42:01	-- common/autotest_common.sh@871 -- # break
00:11:12.259   00:42:01	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:12.259   00:42:01	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:12.259   00:42:01	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:12.259  1+0 records in
00:11:12.259  1+0 records out
00:11:12.259  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268965 s, 15.2 MB/s
00:11:12.259    00:42:01	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:12.259   00:42:01	-- common/autotest_common.sh@884 -- # size=4096
00:11:12.259   00:42:01	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:12.259   00:42:01	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:12.259   00:42:01	-- common/autotest_common.sh@887 -- # return 0
00:11:12.259   00:42:01	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:12.259   00:42:01	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:12.259   00:42:01	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
00:11:12.519  /dev/nbd1
00:11:12.519    00:42:01	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:11:12.519   00:42:01	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:11:12.519   00:42:01	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:11:12.519   00:42:01	-- common/autotest_common.sh@867 -- # local i
00:11:12.519   00:42:01	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:11:12.519   00:42:01	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:11:12.519   00:42:01	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:11:12.519   00:42:01	-- common/autotest_common.sh@871 -- # break
00:11:12.519   00:42:01	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:11:12.519   00:42:01	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:11:12.519   00:42:01	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:12.519  1+0 records in
00:11:12.519  1+0 records out
00:11:12.519  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300767 s, 13.6 MB/s
00:11:12.519    00:42:01	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:12.519   00:42:01	-- common/autotest_common.sh@884 -- # size=4096
00:11:12.519   00:42:01	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:11:12.519   00:42:01	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:11:12.519   00:42:01	-- common/autotest_common.sh@887 -- # return 0
00:11:12.519   00:42:01	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:12.519   00:42:01	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:12.519    00:42:01	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:12.519    00:42:01	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:12.519     00:42:01	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:12.777    00:42:02	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:11:12.777    {
00:11:12.777      "nbd_device": "/dev/nbd0",
00:11:12.777      "bdev_name": "Nvme0n1p1"
00:11:12.777    },
00:11:12.777    {
00:11:12.777      "nbd_device": "/dev/nbd1",
00:11:12.777      "bdev_name": "Nvme0n1p2"
00:11:12.777    }
00:11:12.777  ]'
00:11:12.777     00:42:02	-- bdev/nbd_common.sh@64 -- # echo '[
00:11:12.777    {
00:11:12.777      "nbd_device": "/dev/nbd0",
00:11:12.777      "bdev_name": "Nvme0n1p1"
00:11:12.777    },
00:11:12.777    {
00:11:12.777      "nbd_device": "/dev/nbd1",
00:11:12.777      "bdev_name": "Nvme0n1p2"
00:11:12.777    }
00:11:12.777  ]'
00:11:12.777     00:42:02	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:13.037    00:42:02	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:11:13.037  /dev/nbd1'
00:11:13.037     00:42:02	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:11:13.037  /dev/nbd1'
00:11:13.037     00:42:02	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:13.037    00:42:02	-- bdev/nbd_common.sh@65 -- # count=2
00:11:13.037    00:42:02	-- bdev/nbd_common.sh@66 -- # echo 2
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@95 -- # count=2
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@71 -- # local operation=write
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:11:13.037  256+0 records in
00:11:13.037  256+0 records out
00:11:13.037  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00768671 s, 136 MB/s
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:11:13.037  256+0 records in
00:11:13.037  256+0 records out
00:11:13.037  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03558 s, 29.5 MB/s
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:11:13.037  256+0 records in
00:11:13.037  256+0 records out
00:11:13.037  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0433881 s, 24.2 MB/s
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
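
The data-integrity half of nbd_dd_data_verify is plain dd plus cmp: one random 1 MiB pattern is written through each NBD device and then byte-compared against the source file. In sketch form (sizes match the 256 x 4 KiB pattern used above; the temp path is illustrative):

    # Sketch: write a random pattern to each NBD device, then verify byte-for-byte.
    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$pattern" "$dev"   # non-zero exit on the first differing byte
    done
    rm -f "$pattern"
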
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@51 -- # local i
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:13.037   00:42:02	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:13.296    00:42:02	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@41 -- # break
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@45 -- # return 0
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:13.296   00:42:02	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:11:13.556    00:42:02	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:13.556   00:42:02	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:13.556   00:42:02	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:13.556   00:42:02	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:13.556   00:42:02	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:13.556   00:42:02	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:13.556   00:42:02	-- bdev/nbd_common.sh@41 -- # break
00:11:13.556   00:42:02	-- bdev/nbd_common.sh@45 -- # return 0
00:11:13.556    00:42:02	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:11:13.556    00:42:02	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:13.556     00:42:02	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:11:13.815    00:42:03	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:11:13.815     00:42:03	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:13.815     00:42:03	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:13.815    00:42:03	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:13.815     00:42:03	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:13.815     00:42:03	-- bdev/nbd_common.sh@65 -- # echo ''
00:11:13.815     00:42:03	-- bdev/nbd_common.sh@65 -- # true
00:11:13.815    00:42:03	-- bdev/nbd_common.sh@65 -- # count=0
00:11:13.815    00:42:03	-- bdev/nbd_common.sh@66 -- # echo 0
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@104 -- # count=0
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@109 -- # return 0
00:11:13.815   00:42:03	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:11:13.815   00:42:03	-- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:11:14.074  malloc_lvol_verify
00:11:14.074   00:42:03	-- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:11:14.333  ae73e1d0-91b1-47e7-862a-88fb9f453fcd
00:11:14.333   00:42:03	-- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:11:14.592  93452bf8-fd12-4415-b138-40e152d1904e
00:11:14.592   00:42:03	-- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:11:14.851  /dev/nbd0
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:11:14.851  mke2fs 1.47.0 (5-Feb-2023)
00:11:14.851  Discarding device blocks:    0/4096         done                            
00:11:14.851  Creating filesystem with 4096 1k blocks and 1024 inodes
00:11:14.851  
00:11:14.851  Allocating group tables: 0/1   done                            
00:11:14.851  Writing inode tables: 0/1   done                            
00:11:14.851  Creating journal (1024 blocks): done
00:11:14.851  Writing superblocks and filesystem accounting information: 0/1   done
00:11:14.851  
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@51 -- # local i
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:14.851   00:42:03	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:15.110    00:42:04	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@41 -- # break
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@45 -- # return 0
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:11:15.110   00:42:04	-- bdev/nbd_common.sh@147 -- # return 0
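
The mkfs.ext4 run above is the payoff of a short lvol smoke test: a malloc bdev is wrapped in an lvstore, a logical volume is carved from it, exported over NBD, and formatted. Reconstructed as a sketch from the RPCs in this trace:

    # Sketch: malloc bdev -> lvstore -> lvol -> NBD export -> mkfs, as traced above.
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume in store "lvs"
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0        # must complete cleanly for the test to pass
    rpc nbd_stop_disk /dev/nbd0
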
00:11:15.110   00:42:04	-- bdev/blockdev.sh@324 -- # killprocess 975897
00:11:15.110   00:42:04	-- common/autotest_common.sh@936 -- # '[' -z 975897 ']'
00:11:15.110   00:42:04	-- common/autotest_common.sh@940 -- # kill -0 975897
00:11:15.110    00:42:04	-- common/autotest_common.sh@941 -- # uname
00:11:15.110   00:42:04	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:15.110    00:42:04	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 975897
00:11:15.110   00:42:04	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:15.110   00:42:04	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:15.110   00:42:04	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 975897'
00:11:15.110  killing process with pid 975897
00:11:15.110   00:42:04	-- common/autotest_common.sh@955 -- # kill 975897
00:11:15.110   00:42:04	-- common/autotest_common.sh@960 -- # wait 975897
00:11:19.319   00:42:08	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:11:19.319  
00:11:19.319  real	0m12.729s
00:11:19.319  user	0m15.317s
00:11:19.319  sys	0m2.569s
00:11:19.319   00:42:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:19.319   00:42:08	-- common/autotest_common.sh@10 -- # set +x
00:11:19.319  ************************************
00:11:19.319  END TEST bdev_nbd
00:11:19.319  ************************************
00:11:19.319   00:42:08	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:11:19.319   00:42:08	-- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']'
00:11:19.319   00:42:08	-- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']'
00:11:19.319   00:42:08	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:11:19.319  skipping fio tests on NVMe due to multi-ns failures.
00:11:19.319   00:42:08	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:11:19.319   00:42:08	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:11:19.319   00:42:08	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:11:19.319   00:42:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:19.319   00:42:08	-- common/autotest_common.sh@10 -- # set +x
00:11:19.319  ************************************
00:11:19.319  START TEST bdev_verify
00:11:19.319  ************************************
00:11:19.319   00:42:08	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:11:19.319  [2024-12-17 00:42:08.371524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:19.319  [2024-12-17 00:42:08.371602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978290 ]
00:11:19.319  EAL: No free 2048 kB hugepages reported on node 1
00:11:19.319  [2024-12-17 00:42:08.479434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:19.319  [2024-12-17 00:42:08.531475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:19.319  [2024-12-17 00:42:08.531479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:19.579  [2024-12-17 00:42:08.755429] 'OCF_Core' volume operations registered
00:11:19.579  [2024-12-17 00:42:08.757861] 'OCF_Cache' volume operations registered
00:11:19.579  [2024-12-17 00:42:08.760792] 'OCF Composite' volume operations registered
00:11:19.579  [2024-12-17 00:42:08.763239] 'SPDK_block_device' volume operations registered
00:11:22.868  Running I/O for 5 seconds...
00:11:28.142  
00:11:28.142                                                                                                  Latency(us)
00:11:28.142   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:28.142   Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:28.142  	 Verification LBA range: start 0x0 length 0xe8e0580
00:11:28.142  	 Nvme0n1p1           :       5.02    7615.85      29.75       0.00     0.00   16759.27    1923.34   17438.27
00:11:28.142   Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:28.142  	 Verification LBA range: start 0xe8e0580 length 0xe8e0580
00:11:28.142  	 Nvme0n1p1           :       5.03    7675.49      29.98       0.00     0.00   16605.31    2835.14   16640.45
00:11:28.142   Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:28.142  	 Verification LBA range: start 0x0 length 0xe8e057f
00:11:28.142  	 Nvme0n1p2           :       5.03    7592.28      29.66       0.00     0.00   16787.40    3547.49   19375.86
00:11:28.142   Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:28.142  	 Verification LBA range: start 0xe8e057f length 0xe8e057f
00:11:28.142  	 Nvme0n1p2           :       5.02    7682.73      30.01       0.00     0.00   16614.96    2735.42   16526.47
00:11:28.142   ===================================================================================================================
00:11:28.142   Total                       :              30566.34     119.40       0.00     0.00   16691.31    1923.34   19375.86

00:11:32.336  
00:11:32.336  real	0m12.456s
00:11:32.336  user	0m23.363s
00:11:32.336  sys	0m0.422s
00:11:32.336   00:42:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:32.336   00:42:20	-- common/autotest_common.sh@10 -- # set +x
00:11:32.336  ************************************
00:11:32.336  END TEST bdev_verify
00:11:32.336  ************************************
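
bdev_verify is one bdevperf invocation; the IOPS table above comes straight from it. Sketched with the parameters of the logged run:

    # Sketch: 5-second read-back verification against both GPT partitions.
    # -q 128: 128 outstanding I/Os; -o 4096: 4 KiB I/O size; -w verify: read-back
    # verification workload; -t 5: run for 5 seconds; -m 0x3: cores 0 and 1.
    # -C is copied verbatim from the logged command line.
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
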
00:11:32.336   00:42:20	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:32.336   00:42:20	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:11:32.336   00:42:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:32.336   00:42:20	-- common/autotest_common.sh@10 -- # set +x
00:11:32.336  ************************************
00:11:32.336  START TEST bdev_verify_big_io
00:11:32.336  ************************************
00:11:32.336   00:42:20	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:32.336  [2024-12-17 00:42:20.879520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:32.336  [2024-12-17 00:42:20.879604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979921 ]
00:11:32.336  EAL: No free 2048 kB hugepages reported on node 1
00:11:32.336  [2024-12-17 00:42:20.987330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:32.336  [2024-12-17 00:42:21.041751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:32.336  [2024-12-17 00:42:21.041755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:32.336  [2024-12-17 00:42:21.269581] 'OCF_Core' volume operations registered
00:11:32.336  [2024-12-17 00:42:21.272016] 'OCF_Cache' volume operations registered
00:11:32.336  [2024-12-17 00:42:21.274927] 'OCF Composite' volume operations registered
00:11:32.336  [2024-12-17 00:42:21.277363] 'SPDK_block_device' volume operations registered
00:11:35.624  Running I/O for 5 seconds...
00:11:40.898  
00:11:40.898                                                                                                  Latency(us)
00:11:40.898   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:40.898   Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.898  	 Verification LBA range: start 0x0 length 0xe8e058
00:11:40.898  	 Nvme0n1p1           :       5.24     644.18      40.26       0.00     0.00  193743.16   66561.78  221568.67
00:11:40.898   Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.898  	 Verification LBA range: start 0xe8e058 length 0xe8e058
00:11:40.898  	 Nvme0n1p1           :       5.26     659.54      41.22       0.00     0.00  191524.19    2578.70  222480.47
00:11:40.899   Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:40.899  	 Verification LBA range: start 0x0 length 0xe8e057
00:11:40.899  	 Nvme0n1p2           :       5.26     659.56      41.22       0.00     0.00  187933.64    3134.33  210627.01
00:11:40.899   Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:40.899  	 Verification LBA range: start 0xe8e057 length 0xe8e057
00:11:40.899  	 Nvme0n1p2           :       5.26     658.37      41.15       0.00     0.00  188345.71    2778.16  203332.56
00:11:40.899   ===================================================================================================================
00:11:40.899   Total                       :               2621.65     163.85       0.00     0.00  190365.23    2578.70  222480.47
00:11:45.094  
00:11:45.094  real	0m12.634s
00:11:45.094  user	0m23.754s
00:11:45.094  sys	0m0.390s
00:11:45.094   00:42:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:45.094   00:42:33	-- common/autotest_common.sh@10 -- # set +x
00:11:45.094  ************************************
00:11:45.094  END TEST bdev_verify_big_io
00:11:45.094  ************************************
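
The big-I/O variant differs from bdev_verify only in the I/O size flag; IOPS drop but the aggregate data rate rises (119.40 vs 163.85 MiB/s in the two tables above):

    # Sketch: the same verify run with 64 KiB I/Os instead of 4 KiB.
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
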
00:11:45.094   00:42:33	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:45.094   00:42:33	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:11:45.094   00:42:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:45.094   00:42:33	-- common/autotest_common.sh@10 -- # set +x
00:11:45.094  ************************************
00:11:45.094  START TEST bdev_write_zeroes
00:11:45.094  ************************************
00:11:45.094   00:42:33	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:45.094  [2024-12-17 00:42:33.553073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:45.094  [2024-12-17 00:42:33.553143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981568 ]
00:11:45.094  EAL: No free 2048 kB hugepages reported on node 1
00:11:45.094  [2024-12-17 00:42:33.659735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:45.094  [2024-12-17 00:42:33.710824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:45.094  [2024-12-17 00:42:33.936982] 'OCF_Core' volume operations registered
00:11:45.094  [2024-12-17 00:42:33.939399] 'OCF_Cache' volume operations registered
00:11:45.094  [2024-12-17 00:42:33.942366] 'OCF Composite' volume operations registered
00:11:45.094  [2024-12-17 00:42:33.944789] 'SPDK_block_device' volume operations registered
00:11:47.637  Running I/O for 1 seconds...
00:11:48.575  
00:11:48.575                                                                                                  Latency(us)
00:11:48.575   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:48.575   Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:48.575  	 Nvme0n1p1           :       1.01   24236.13      94.67       0.00     0.00    5268.76    3177.07    6154.69
00:11:48.575   Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:48.575  	 Nvme0n1p2           :       1.01   24190.64      94.49       0.00     0.00    5269.27    2849.39    6097.70
00:11:48.575   ===================================================================================================================
00:11:48.575   Total                       :              48426.76     189.17       0.00     0.00    5269.01    2849.39    6154.69
00:11:52.768  
00:11:52.768  real	0m8.331s
00:11:52.768  user	0m7.215s
00:11:52.768  sys	0m0.364s
00:11:52.768   00:42:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:52.768   00:42:41	-- common/autotest_common.sh@10 -- # set +x
00:11:52.768  ************************************
00:11:52.768  END TEST bdev_write_zeroes
00:11:52.768  ************************************
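
bdev_write_zeroes exercises the write_zeroes I/O type on a single core for one second; the sketch below mirrors the logged command line:

    # Sketch: one-second write_zeroes pass over both partitions.
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1
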
00:11:52.768   00:42:41	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:52.768   00:42:41	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:11:52.768   00:42:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:52.768   00:42:41	-- common/autotest_common.sh@10 -- # set +x
00:11:52.768  ************************************
00:11:52.768  START TEST bdev_json_nonenclosed
00:11:52.768  ************************************
00:11:52.768   00:42:41	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:52.768  [2024-12-17 00:42:41.926634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:52.768  [2024-12-17 00:42:41.926703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982731 ]
00:11:52.768  EAL: No free 2048 kB hugepages reported on node 1
00:11:53.027  [2024-12-17 00:42:42.034167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:53.027  [2024-12-17 00:42:42.084530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:53.027  [2024-12-17 00:42:42.084648] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:11:53.027  [2024-12-17 00:42:42.084673] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:53.027  
00:11:53.027  real	0m0.285s
00:11:53.027  user	0m0.150s
00:11:53.027  sys	0m0.133s
00:11:53.027   00:42:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:53.027   00:42:42	-- common/autotest_common.sh@10 -- # set +x
00:11:53.027  ************************************
00:11:53.027  END TEST bdev_json_nonenclosed
00:11:53.027  ************************************
00:11:53.027   00:42:42	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:53.027   00:42:42	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:11:53.027   00:42:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:53.027   00:42:42	-- common/autotest_common.sh@10 -- # set +x
00:11:53.027  ************************************
00:11:53.027  START TEST bdev_json_nonarray
00:11:53.027  ************************************
00:11:53.027   00:42:42	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:53.027  [2024-12-17 00:42:42.261089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:53.027  [2024-12-17 00:42:42.261157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982837 ]
00:11:53.286  EAL: No free 2048 kB hugepages reported on node 1
00:11:53.286  [2024-12-17 00:42:42.368923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:53.287  [2024-12-17 00:42:42.418961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:53.287  [2024-12-17 00:42:42.419081] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:11:53.287  [2024-12-17 00:42:42.419104] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:53.287  
00:11:53.287  real	0m0.289s
00:11:53.287  user	0m0.161s
00:11:53.287  sys	0m0.126s
00:11:53.287   00:42:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:53.287   00:42:42	-- common/autotest_common.sh@10 -- # set +x
00:11:53.287  ************************************
00:11:53.287  END TEST bdev_json_nonarray
00:11:53.287  ************************************
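The companion test exercises the next validation step: the config is a well-formed JSON object, but its "subsystems" member is not an array. A sketch of an equivalent input (the real nonarray.json contents are an assumption):

    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    ./build/examples/bdevperf --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
    # expected: "Invalid JSON configuration: 'subsystems' should be an array." and a non-zero exit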
00:11:53.287   00:42:42	-- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]]
00:11:53.287   00:42:42	-- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]]
00:11:53.287   00:42:42	-- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:11:53.287   00:42:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:53.287   00:42:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:53.287   00:42:42	-- common/autotest_common.sh@10 -- # set +x
00:11:53.546  ************************************
00:11:53.546  START TEST bdev_gpt_uuid
00:11:53.546  ************************************
00:11:53.546   00:42:42	-- common/autotest_common.sh@1114 -- # bdev_gpt_uuid
00:11:53.546   00:42:42	-- bdev/blockdev.sh@612 -- # local bdev
00:11:53.546   00:42:42	-- bdev/blockdev.sh@614 -- # start_spdk_tgt
00:11:53.546   00:42:42	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=982860
00:11:53.546   00:42:42	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:53.546   00:42:42	-- bdev/blockdev.sh@47 -- # waitforlisten 982860
00:11:53.546   00:42:42	-- common/autotest_common.sh@829 -- # '[' -z 982860 ']'
00:11:53.546   00:42:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:53.546   00:42:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:11:53.546   00:42:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:53.546  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:53.546   00:42:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:11:53.546   00:42:42	-- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' ''
00:11:53.546   00:42:42	-- common/autotest_common.sh@10 -- # set +x
00:11:53.546  [2024-12-17 00:42:42.606612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:53.546  [2024-12-17 00:42:42.606681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982860 ]
00:11:53.546  EAL: No free 2048 kB hugepages reported on node 1
00:11:53.546  [2024-12-17 00:42:42.701624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:53.546  [2024-12-17 00:42:42.748851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:11:53.546  [2024-12-17 00:42:42.749022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:11:53.806  [2024-12-17 00:42:42.900964] 'OCF_Core' volume operations registered
00:11:53.806  [2024-12-17 00:42:42.903120] 'OCF_Cache' volume operations registered
00:11:53.806  [2024-12-17 00:42:42.905661] 'OCF Composite' volume operations registered
00:11:53.806  [2024-12-17 00:42:42.907822] 'SPDK_block_device' volume operations registered
00:11:54.374   00:42:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:54.374   00:42:43	-- common/autotest_common.sh@862 -- # return 0
00:11:54.374   00:42:43	-- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:11:54.374   00:42:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.374   00:42:43	-- common/autotest_common.sh@10 -- # set +x
00:11:57.664  Some configs were skipped because the RPC state that can call them passed over.
00:11:57.664   00:42:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.664   00:42:46	-- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine
00:11:57.664   00:42:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.664   00:42:46	-- common/autotest_common.sh@10 -- # set +x
00:11:57.664   00:42:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.664    00:42:46	-- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:11:57.664    00:42:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.664    00:42:46	-- common/autotest_common.sh@10 -- # set +x
00:11:57.664    00:42:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.664   00:42:46	-- bdev/blockdev.sh@619 -- # bdev='[
00:11:57.664    {
00:11:57.664      "name": "Nvme0n1p1",
00:11:57.664      "aliases": [
00:11:57.664        "6f89f330-603b-4116-ac73-2ca8eae53030"
00:11:57.664      ],
00:11:57.664      "product_name": "GPT Disk",
00:11:57.664      "block_size": 512,
00:11:57.664      "num_blocks": 3907016704,
00:11:57.664      "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:11:57.664      "assigned_rate_limits": {
00:11:57.664        "rw_ios_per_sec": 0,
00:11:57.664        "rw_mbytes_per_sec": 0,
00:11:57.664        "r_mbytes_per_sec": 0,
00:11:57.664        "w_mbytes_per_sec": 0
00:11:57.664      },
00:11:57.664      "claimed": false,
00:11:57.664      "zoned": false,
00:11:57.664      "supported_io_types": {
00:11:57.664        "read": true,
00:11:57.664        "write": true,
00:11:57.664        "unmap": true,
00:11:57.664        "write_zeroes": true,
00:11:57.664        "flush": true,
00:11:57.664        "reset": true,
00:11:57.664        "compare": false,
00:11:57.664        "compare_and_write": false,
00:11:57.664        "abort": true,
00:11:57.664        "nvme_admin": false,
00:11:57.664        "nvme_io": false
00:11:57.664      },
00:11:57.664      "driver_specific": {
00:11:57.664        "gpt": {
00:11:57.664          "base_bdev": "Nvme0n1",
00:11:57.664          "offset_blocks": 2048,
00:11:57.664          "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:11:57.664          "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:11:57.664          "partition_name": "SPDK_TEST_first"
00:11:57.664        }
00:11:57.664      }
00:11:57.664    }
00:11:57.664  ]'
00:11:57.664    00:42:46	-- bdev/blockdev.sh@620 -- # jq -r length
00:11:57.664   00:42:46	-- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]]
00:11:57.664    00:42:46	-- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]'
00:11:57.664   00:42:46	-- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:11:57.664    00:42:46	-- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:11:57.664   00:42:46	-- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:11:57.664    00:42:46	-- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:11:57.664    00:42:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.664    00:42:46	-- common/autotest_common.sh@10 -- # set +x
00:11:57.664    00:42:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.664   00:42:46	-- bdev/blockdev.sh@624 -- # bdev='[
00:11:57.664    {
00:11:57.664      "name": "Nvme0n1p2",
00:11:57.664      "aliases": [
00:11:57.664        "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:11:57.664      ],
00:11:57.664      "product_name": "GPT Disk",
00:11:57.664      "block_size": 512,
00:11:57.664      "num_blocks": 3907016703,
00:11:57.664      "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:11:57.664      "assigned_rate_limits": {
00:11:57.664        "rw_ios_per_sec": 0,
00:11:57.664        "rw_mbytes_per_sec": 0,
00:11:57.664        "r_mbytes_per_sec": 0,
00:11:57.664        "w_mbytes_per_sec": 0
00:11:57.664      },
00:11:57.664      "claimed": false,
00:11:57.664      "zoned": false,
00:11:57.664      "supported_io_types": {
00:11:57.664        "read": true,
00:11:57.664        "write": true,
00:11:57.664        "unmap": true,
00:11:57.664        "write_zeroes": true,
00:11:57.664        "flush": true,
00:11:57.664        "reset": true,
00:11:57.664        "compare": false,
00:11:57.664        "compare_and_write": false,
00:11:57.664        "abort": true,
00:11:57.664        "nvme_admin": false,
00:11:57.664        "nvme_io": false
00:11:57.664      },
00:11:57.664      "driver_specific": {
00:11:57.664        "gpt": {
00:11:57.664          "base_bdev": "Nvme0n1",
00:11:57.664          "offset_blocks": 3907018752,
00:11:57.664          "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:11:57.664          "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:11:57.664          "partition_name": "SPDK_TEST_second"
00:11:57.664        }
00:11:57.664      }
00:11:57.664    }
00:11:57.664  ]'
00:11:57.664    00:42:46	-- bdev/blockdev.sh@625 -- # jq -r length
00:11:57.664   00:42:46	-- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]]
00:11:57.664    00:42:46	-- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]'
00:11:57.664   00:42:46	-- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:11:57.664    00:42:46	-- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:11:57.664   00:42:46	-- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:11:57.664   00:42:46	-- bdev/blockdev.sh@629 -- # killprocess 982860
00:11:57.664   00:42:46	-- common/autotest_common.sh@936 -- # '[' -z 982860 ']'
00:11:57.664   00:42:46	-- common/autotest_common.sh@940 -- # kill -0 982860
00:11:57.664    00:42:46	-- common/autotest_common.sh@941 -- # uname
00:11:57.664   00:42:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:57.664    00:42:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 982860
00:11:57.665   00:42:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:11:57.665   00:42:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:11:57.665   00:42:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 982860'
00:11:57.665  killing process with pid 982860
00:11:57.665   00:42:46	-- common/autotest_common.sh@955 -- # kill 982860
00:11:57.665   00:42:46	-- common/autotest_common.sh@960 -- # wait 982860
00:12:02.007  
00:12:02.007  real	0m8.461s
00:12:02.007  user	0m7.945s
00:12:02.007  sys	0m0.611s
00:12:02.007   00:42:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:02.007   00:42:51	-- common/autotest_common.sh@10 -- # set +x
00:12:02.007  ************************************
00:12:02.007  END TEST bdev_gpt_uuid
00:12:02.007  ************************************
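The gpt_uuid test above boils down to a GUID round-trip: look up each partition bdev by its unique partition GUID and check that the alias and the driver_specific GUID match. A condensed sketch, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and the repo's rpc.py on PATH:

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030        # SPDK_TEST_first, from the dump above
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$uuid")
    [[ $(jq -r length            <<<"$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]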
00:12:02.007   00:42:51	-- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]]
00:12:02.007   00:42:51	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:12:02.007   00:42:51	-- bdev/blockdev.sh@809 -- # cleanup
00:12:02.007   00:42:51	-- bdev/blockdev.sh@21 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile
00:12:02.007   00:42:51	-- bdev/blockdev.sh@22 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:12:02.007   00:42:51	-- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]]
00:12:02.007   00:42:51	-- bdev/blockdev.sh@28 -- # [[ gpt == daos ]]
00:12:02.007   00:42:51	-- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]]
00:12:02.007   00:42:51	-- bdev/blockdev.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:12:05.297  Waiting for block devices as requested
00:12:05.297  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:12:05.297  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:12:05.297  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:12:05.297  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:12:05.297  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:12:05.298  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:12:05.298  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:12:05.557  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:12:05.557  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:12:05.557  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:12:05.817  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:12:05.817  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:12:05.817  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:12:06.075  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:12:06.075  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:12:06.075  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:12:06.334  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:12:06.334   00:42:55	-- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]]
00:12:06.334   00:42:55	-- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1
00:12:06.593  /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:12:06.593  /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:12:06.593  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:12:06.593  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:12:06.593   00:42:55	-- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]]
00:12:06.593  
00:12:06.593  real	1m37.415s
00:12:06.593  user	2m12.559s
00:12:06.593  sys	0m14.152s
00:12:06.593   00:42:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:06.593   00:42:55	-- common/autotest_common.sh@10 -- # set +x
00:12:06.593  ************************************
00:12:06.593  END TEST blockdev_nvme_gpt
00:12:06.593  ************************************
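In the cleanup above, wipefs reports the byte strings it erased: 45 46 49 20 50 41 52 54 is ASCII for the GPT signature, found in the primary header at offset 0x200 (LBA 1) and in the backup header near the end of the disk, while 55 aa at offset 0x1fe is the protective-MBR boot signature. A quick way to confirm the decoding:

    printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'   # prints: EFI PART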
00:12:06.593   00:42:55	-- spdk/autotest.sh@209 -- # run_test nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh
00:12:06.593   00:42:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:06.593   00:42:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:06.593   00:42:55	-- common/autotest_common.sh@10 -- # set +x
00:12:06.593  ************************************
00:12:06.593  START TEST nvme
00:12:06.593  ************************************
00:12:06.593   00:42:55	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh
00:12:06.593  * Looking for test storage...
00:12:06.593  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:12:06.593    00:42:55	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:12:06.593     00:42:55	-- common/autotest_common.sh@1690 -- # lcov --version
00:12:06.594     00:42:55	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:12:06.852    00:42:55	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:12:06.852    00:42:55	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:12:06.852    00:42:55	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:12:06.852    00:42:55	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:12:06.852    00:42:55	-- scripts/common.sh@335 -- # IFS=.-:
00:12:06.852    00:42:55	-- scripts/common.sh@335 -- # read -ra ver1
00:12:06.852    00:42:55	-- scripts/common.sh@336 -- # IFS=.-:
00:12:06.852    00:42:55	-- scripts/common.sh@336 -- # read -ra ver2
00:12:06.852    00:42:55	-- scripts/common.sh@337 -- # local 'op=<'
00:12:06.852    00:42:55	-- scripts/common.sh@339 -- # ver1_l=2
00:12:06.852    00:42:55	-- scripts/common.sh@340 -- # ver2_l=1
00:12:06.853    00:42:55	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:12:06.853    00:42:55	-- scripts/common.sh@343 -- # case "$op" in
00:12:06.853    00:42:55	-- scripts/common.sh@344 -- # : 1
00:12:06.853    00:42:55	-- scripts/common.sh@363 -- # (( v = 0 ))
00:12:06.853    00:42:55	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:06.853     00:42:55	-- scripts/common.sh@364 -- # decimal 1
00:12:06.853     00:42:55	-- scripts/common.sh@352 -- # local d=1
00:12:06.853     00:42:55	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:06.853     00:42:55	-- scripts/common.sh@354 -- # echo 1
00:12:06.853    00:42:55	-- scripts/common.sh@364 -- # ver1[v]=1
00:12:06.853     00:42:55	-- scripts/common.sh@365 -- # decimal 2
00:12:06.853     00:42:55	-- scripts/common.sh@352 -- # local d=2
00:12:06.853     00:42:55	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:06.853     00:42:55	-- scripts/common.sh@354 -- # echo 2
00:12:06.853    00:42:55	-- scripts/common.sh@365 -- # ver2[v]=2
00:12:06.853    00:42:55	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:12:06.853    00:42:55	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:12:06.853    00:42:55	-- scripts/common.sh@367 -- # return 0
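The trace above is scripts/common.sh comparing the installed lcov version against 1.15: both strings are split on ".-:" and compared field by field, and "1.15 < 2" is decided by the first field (1 < 2), so the comparison returns 0 (true) and the legacy lcov option names are exported below. A standalone equivalent using sort -V (the helper name is illustrative, not from the repo):

    version_lt() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]; }
    version_lt 1.15 2 && echo "lcov older than 2: use --rc lcov_branch_coverage=1 style options"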
00:12:06.853    00:42:55	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:06.853    00:42:55	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:12:06.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.853  		--rc genhtml_branch_coverage=1
00:12:06.853  		--rc genhtml_function_coverage=1
00:12:06.853  		--rc genhtml_legend=1
00:12:06.853  		--rc geninfo_all_blocks=1
00:12:06.853  		--rc geninfo_unexecuted_blocks=1
00:12:06.853  		
00:12:06.853  		'
00:12:06.853    00:42:55	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:12:06.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.853  		--rc genhtml_branch_coverage=1
00:12:06.853  		--rc genhtml_function_coverage=1
00:12:06.853  		--rc genhtml_legend=1
00:12:06.853  		--rc geninfo_all_blocks=1
00:12:06.853  		--rc geninfo_unexecuted_blocks=1
00:12:06.853  		
00:12:06.853  		'
00:12:06.853    00:42:55	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:12:06.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.853  		--rc genhtml_branch_coverage=1
00:12:06.853  		--rc genhtml_function_coverage=1
00:12:06.853  		--rc genhtml_legend=1
00:12:06.853  		--rc geninfo_all_blocks=1
00:12:06.853  		--rc geninfo_unexecuted_blocks=1
00:12:06.853  		
00:12:06.853  		'
00:12:06.853    00:42:55	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:12:06.853  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.853  		--rc genhtml_branch_coverage=1
00:12:06.853  		--rc genhtml_function_coverage=1
00:12:06.853  		--rc genhtml_legend=1
00:12:06.853  		--rc geninfo_all_blocks=1
00:12:06.853  		--rc geninfo_unexecuted_blocks=1
00:12:06.853  		
00:12:06.853  		'
00:12:06.853   00:42:55	-- nvme/nvme.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:12:10.145  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:12:10.145  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:12:13.438  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
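Here setup.sh rebinds the ioatdma channels and the NVMe controller at 0000:5e:00.0 to vfio-pci so SPDK's userspace drivers can claim them. Roughly what it does per device, sketched by hand for the controller from this run (the sysfs driver_override mechanism is assumed; setup.sh also handles hugepages and device permissions):

    bdf=0000:5e:00.0
    echo "$bdf"   > /sys/bus/pci/devices/$bdf/driver/unbind
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf"   > /sys/bus/pci/drivers_probe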
00:12:13.438    00:43:02	-- nvme/nvme.sh@79 -- # uname
00:12:13.438   00:43:02	-- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:12:13.438   00:43:02	-- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:12:13.438   00:43:02	-- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:12:13.438   00:43:02	-- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:12:13.438   00:43:02	-- common/autotest_common.sh@1054 -- # _randomize_va_space=2
00:12:13.438   00:43:02	-- common/autotest_common.sh@1055 -- # echo 0
00:12:13.438   00:43:02	-- common/autotest_common.sh@1057 -- # stubpid=986749
00:12:13.438   00:43:02	-- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes...
00:12:13.438  Waiting for stub to ready for secondary processes...
00:12:13.438   00:43:02	-- common/autotest_common.sh@1056 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:12:13.438   00:43:02	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:13.438   00:43:02	-- common/autotest_common.sh@1061 -- # [[ -e /proc/986749 ]]
00:12:13.438   00:43:02	-- common/autotest_common.sh@1062 -- # sleep 1s
00:12:13.438  [2024-12-17 00:43:02.520460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:13.438  [2024-12-17 00:43:02.520519] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:13.438  EAL: No free 2048 kB hugepages reported on node 1
00:12:14.376   00:43:03	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:14.376   00:43:03	-- common/autotest_common.sh@1061 -- # [[ -e /proc/986749 ]]
00:12:14.376   00:43:03	-- common/autotest_common.sh@1062 -- # sleep 1s
00:12:14.376  [2024-12-17 00:43:03.608407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:14.635  [2024-12-17 00:43:03.640051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:12:14.635  [2024-12-17 00:43:03.640152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:12:14.635  [2024-12-17 00:43:03.640153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:15.572   00:43:04	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:15.572   00:43:04	-- common/autotest_common.sh@1061 -- # [[ -e /proc/986749 ]]
00:12:15.572   00:43:04	-- common/autotest_common.sh@1062 -- # sleep 1s
00:12:16.507   00:43:05	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:16.507   00:43:05	-- common/autotest_common.sh@1061 -- # [[ -e /proc/986749 ]]
00:12:16.507   00:43:05	-- common/autotest_common.sh@1062 -- # sleep 1s
00:12:17.444   00:43:06	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:17.444   00:43:06	-- common/autotest_common.sh@1061 -- # [[ -e /proc/986749 ]]
00:12:17.444   00:43:06	-- common/autotest_common.sh@1062 -- # sleep 1s
00:12:17.444  [2024-12-17 00:43:06.648978] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:12:17.444  [2024-12-17 00:43:06.665161] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:12:17.444  [2024-12-17 00:43:06.665306] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:12:18.381   00:43:07	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:12:18.381   00:43:07	-- common/autotest_common.sh@1064 -- # echo done.
00:12:18.381  done.
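The stub is a bare SPDK primary process that pre-reserves 4096 MB of hugepage memory on core mask 0xE; the harness above simply polls until the stub publishes its readiness file, bailing out if the process dies first. The loop, reduced to its essentials (simplified from autotest_common.sh; names match the log):

    ./test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e /proc/$stubpid ] || exit 1   # stub exited before becoming ready
        sleep 1s
    done
    echo done.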
00:12:18.381   00:43:07	-- nvme/nvme.sh@84 -- # run_test nvme_reset /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:12:18.381   00:43:07	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:12:18.381   00:43:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:18.381   00:43:07	-- common/autotest_common.sh@10 -- # set +x
00:12:18.381  ************************************
00:12:18.381  START TEST nvme_reset
00:12:18.381  ************************************
00:12:18.381   00:43:07	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:12:18.641  [2024-12-17 00:43:07.805253] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:18.641  [... 63 further identical "aborting outstanding command" messages (00:43:07.805332 through 00:43:07.806373) elided: 64 outstanding commands aborted on this reset ...]
00:12:23.912  [2024-12-17 00:43:12.821482] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:23.913  [... 63 further identical "aborting outstanding command" messages (00:43:12.821559 through 00:43:12.822579) elided: 64 outstanding commands aborted on this reset ...]
00:12:29.186  [2024-12-17 00:43:17.837574] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.187  [... 63 further identical "aborting outstanding command" messages (00:43:17.837627 through 00:43:17.838655) elided: 64 outstanding commands aborted on this reset ...]
00:12:34.464  Initializing NVMe Controllers
00:12:34.464  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 0
00:12:34.464  Initialization complete. Launching workers.
00:12:34.464  Starting thread on core 0
00:12:34.464  ========================================================
00:12:34.464            647360 IO completed successfully
00:12:34.464                64 IO completed with error
00:12:34.464  --------------------------------------------------------
00:12:34.464            647424 IO completed total
00:12:34.464            647424 IO submitted
00:12:34.464  Starting thread on core 0
00:12:34.464  ========================================================
00:12:34.464            646976 IO completed successfully
00:12:34.464                64 IO completed with error
00:12:34.464  --------------------------------------------------------
00:12:34.464            647040 IO completed total
00:12:34.464            647040 IO submitted
00:12:34.464  Starting thread on core 0
00:12:34.464  ========================================================
00:12:34.464            647360 IO completed successfully
00:12:34.464                64 IO completed with error
00:12:34.464  --------------------------------------------------------
00:12:34.464            647424 IO completed total
00:12:34.464            647424 IO submitted
00:12:34.464  
00:12:34.464  real	0m15.346s
00:12:34.464  user	0m15.065s
00:12:34.464  sys	0m0.179s
00:12:34.464   00:43:22	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:34.464   00:43:22	-- common/autotest_common.sh@10 -- # set +x
00:12:34.464  ************************************
00:12:34.464  END TEST nvme_reset
00:12:34.464  ************************************
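Each controller reset above aborts whatever is in flight, which at queue depth 64 is exactly the 64 "IO completed with error" counted per worker; the per-thread tallies are internally consistent:

    echo $((647360 + 64))   # 647424 == "IO completed total" == "IO submitted" for the first worker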
00:12:34.464   00:43:22	-- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:12:34.464   00:43:22	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:34.464   00:43:22	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:34.464   00:43:22	-- common/autotest_common.sh@10 -- # set +x
00:12:34.464  ************************************
00:12:34.464  START TEST nvme_identify
00:12:34.464  ************************************
00:12:34.464   00:43:22	-- common/autotest_common.sh@1114 -- # nvme_identify
00:12:34.464   00:43:22	-- nvme/nvme.sh@12 -- # bdfs=()
00:12:34.464   00:43:22	-- nvme/nvme.sh@12 -- # local bdfs bdf
00:12:34.464   00:43:22	-- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:12:34.464    00:43:22	-- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:12:34.464    00:43:22	-- common/autotest_common.sh@1508 -- # bdfs=()
00:12:34.464    00:43:22	-- common/autotest_common.sh@1508 -- # local bdfs
00:12:34.464    00:43:22	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:34.464     00:43:22	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:12:34.465     00:43:22	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:12:34.465    00:43:23	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:12:34.465    00:43:23	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
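get_nvme_bdfs discovers controllers by asking gen_nvme.sh for a config fragment and pulling out the transport addresses; spelled out, the xtrace above is equivalent to (run from the repo root):

    scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
    # -> 0000:5e:00.0   (the single NVMe controller on this node)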
00:12:34.465   00:43:23	-- nvme/nvme.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -i 0
00:12:34.465  =====================================================
00:12:34.465  NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:34.465  =====================================================
00:12:34.465  Controller Capabilities/Features
00:12:34.465  ================================
00:12:34.465  Vendor ID:                             8086
00:12:34.465  Subsystem Vendor ID:                   8086
00:12:34.465  Serial Number:                         BTLJ83030AK84P0DGN
00:12:34.465  Model Number:                          INTEL SSDPE2KX040T8
00:12:34.465  Firmware Version:                      VDV10184
00:12:34.465  Recommended Arb Burst:                 0
00:12:34.465  IEEE OUI Identifier:                   e4 d2 5c
00:12:34.465  Multi-path I/O
00:12:34.465    May have multiple subsystem ports:   No
00:12:34.465    May have multiple controllers:       No
00:12:34.465    Associated with SR-IOV VF:           No
00:12:34.465  Max Data Transfer Size:                131072
00:12:34.465  Max Number of Namespaces:              128
00:12:34.465  Max Number of I/O Queues:              128
00:12:34.465  NVMe Specification Version (VS):       1.2
00:12:34.465  NVMe Specification Version (Identify): 1.2
00:12:34.465  Maximum Queue Entries:                 4096
00:12:34.465  Contiguous Queues Required:            Yes
00:12:34.465  Arbitration Mechanisms Supported
00:12:34.465    Weighted Round Robin:                Supported
00:12:34.465    Vendor Specific:                     Not Supported
00:12:34.465  Reset Timeout:                         60000 ms
00:12:34.465  Doorbell Stride:                       4 bytes
00:12:34.465  NVM Subsystem Reset:                   Not Supported
00:12:34.465  Command Sets Supported
00:12:34.465    NVM Command Set:                     Supported
00:12:34.465  Boot Partition:                        Not Supported
00:12:34.465  Memory Page Size Minimum:              4096 bytes
00:12:34.465  Memory Page Size Maximum:              4096 bytes
00:12:34.465  Persistent Memory Region:              Not Supported
00:12:34.465  Optional Asynchronous Events Supported
00:12:34.465    Namespace Attribute Notices:         Not Supported
00:12:34.465    Firmware Activation Notices:         Supported
00:12:34.465    ANA Change Notices:                  Not Supported
00:12:34.465    PLE Aggregate Log Change Notices:    Not Supported
00:12:34.465    LBA Status Info Alert Notices:       Not Supported
00:12:34.465    EGE Aggregate Log Change Notices:    Not Supported
00:12:34.465    Normal NVM Subsystem Shutdown event: Not Supported
00:12:34.465    Zone Descriptor Change Notices:      Not Supported
00:12:34.465    Discovery Log Change Notices:        Not Supported
00:12:34.465  Controller Attributes
00:12:34.465    128-bit Host Identifier:             Not Supported
00:12:34.465    Non-Operational Permissive Mode:     Not Supported
00:12:34.465    NVM Sets:                            Not Supported
00:12:34.465    Read Recovery Levels:                Not Supported
00:12:34.465    Endurance Groups:                    Not Supported
00:12:34.465    Predictable Latency Mode:            Not Supported
00:12:34.465    Traffic Based Keep Alive:            Not Supported
00:12:34.465    Namespace Granularity:               Not Supported
00:12:34.465    SQ Associations:                     Not Supported
00:12:34.465    UUID List:                           Not Supported
00:12:34.465    Multi-Domain Subsystem:              Not Supported
00:12:34.465    Fixed Capacity Management:           Not Supported
00:12:34.465    Variable Capacity Management:        Not Supported
00:12:34.465    Delete Endurance Group:              Not Supported
00:12:34.465    Delete NVM Set:                      Not Supported
00:12:34.465    Extended LBA Formats Supported:      Not Supported
00:12:34.465    Flexible Data Placement Supported:   Not Supported
00:12:34.465  
00:12:34.465  Controller Memory Buffer Support
00:12:34.465  ================================
00:12:34.465  Supported:                             No
00:12:34.465  
00:12:34.465  Persistent Memory Region Support
00:12:34.465  ================================
00:12:34.465  Supported:                             No
00:12:34.465  
00:12:34.465  Admin Command Set Attributes
00:12:34.465  ============================
00:12:34.465  Security Send/Receive:                 Not Supported
00:12:34.465  Format NVM:                            Supported
00:12:34.465  Firmware Activate/Download:            Supported
00:12:34.465  Namespace Management:                  Supported
00:12:34.465  Device Self-Test:                      Not Supported
00:12:34.465  Directives:                            Not Supported
00:12:34.465  NVMe-MI:                               Not Supported
00:12:34.465  Virtualization Management:             Not Supported
00:12:34.465  Doorbell Buffer Config:                Not Supported
00:12:34.465  Get LBA Status Capability:             Not Supported
00:12:34.465  Command & Feature Lockdown Capability: Not Supported
00:12:34.465  Abort Command Limit:                   4
00:12:34.465  Async Event Request Limit:             4
00:12:34.465  Number of Firmware Slots:              4
00:12:34.465  Firmware Slot 1 Read-Only:             No
00:12:34.465  Firmware Activation Without Reset:     Yes
00:12:34.465  Multiple Update Detection Support:     No
00:12:34.465  Firmware Update Granularity:           No Information Provided
00:12:34.465  Per-Namespace SMART Log:               No
00:12:34.465  Asymmetric Namespace Access Log Page:  Not Supported
00:12:34.465  Subsystem NQN:                         
00:12:34.465  Command Effects Log Page:              Supported
00:12:34.465  Get Log Page Extended Data:            Supported
00:12:34.465  Telemetry Log Pages:                   Supported
00:12:34.465  Persistent Event Log Pages:            Not Supported
00:12:34.465  Supported Log Pages Log Page:          May Support
00:12:34.465  Commands Supported & Effects Log Page: Not Supported
00:12:34.465  Feature Identifiers & Effects Log Page: May Support
00:12:34.465  NVMe-MI Commands & Effects Log Page:   May Support
00:12:34.465  Data Area 4 for Telemetry Log:         Not Supported
00:12:34.465  Error Log Page Entries Supported:      64
00:12:34.465  Keep Alive:                            Not Supported
00:12:34.465  
00:12:34.465  NVM Command Set Attributes
00:12:34.465  ==========================
00:12:34.465  Submission Queue Entry Size
00:12:34.465    Max:                       64
00:12:34.465    Min:                       64
00:12:34.465  Completion Queue Entry Size
00:12:34.465    Max:                       16
00:12:34.465    Min:                       16
00:12:34.465  Number of Namespaces:        128
00:12:34.465  Compare Command:             Not Supported
00:12:34.465  Write Uncorrectable Command: Supported
00:12:34.465  Dataset Management Command:  Supported
00:12:34.465  Write Zeroes Command:        Not Supported
00:12:34.465  Set Features Save Field:     Not Supported
00:12:34.465  Reservations:                Not Supported
00:12:34.465  Timestamp:                   Not Supported
00:12:34.465  Copy:                        Not Supported
00:12:34.465  Volatile Write Cache:        Not Present
00:12:34.465  Atomic Write Unit (Normal):  1
00:12:34.465  Atomic Write Unit (PFail):   1
00:12:34.465  Atomic Compare & Write Unit: 1
00:12:34.465  Fused Compare & Write:       Not Supported
00:12:34.465  Scatter-Gather List
00:12:34.465    SGL Command Set:           Not Supported
00:12:34.465    SGL Keyed:                 Not Supported
00:12:34.465    SGL Bit Bucket Descriptor: Not Supported
00:12:34.465    SGL Metadata Pointer:      Not Supported
00:12:34.465    Oversized SGL:             Not Supported
00:12:34.465    SGL Metadata Address:      Not Supported
00:12:34.465    SGL Offset:                Not Supported
00:12:34.465    Transport SGL Data Block:  Not Supported
00:12:34.465  Replay Protected Memory Block:  Not Supported
00:12:34.465  
00:12:34.465  Firmware Slot Information
00:12:34.465  =========================
00:12:34.465  Active slot:                 1
00:12:34.465  Slot 1 Firmware Revision:    VDV10184
00:12:34.465  
00:12:34.465  
00:12:34.465  Commands Supported and Effects
00:12:34.465  ==============================
00:12:34.465  Admin Commands
00:12:34.465  --------------
00:12:34.465     Delete I/O Submission Queue (00h): Supported 
00:12:34.465     Create I/O Submission Queue (01h): Supported All-NS-Exclusive
00:12:34.465                    Get Log Page (02h): Supported 
00:12:34.465     Delete I/O Completion Queue (04h): Supported 
00:12:34.465     Create I/O Completion Queue (05h): Supported All-NS-Exclusive
00:12:34.465                        Identify (06h): Supported 
00:12:34.465                           Abort (08h): Supported 
00:12:34.465                    Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 
00:12:34.465                    Get Features (0Ah): Supported 
00:12:34.465      Asynchronous Event Request (0Ch): Supported 
00:12:34.465            Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive
00:12:34.465                 Firmware Commit (10h): Supported Ctrlr-Cap-Change 
00:12:34.465         Firmware Image Download (11h): Supported 
00:12:34.465            Namespace Attachment (15h): Supported Per-NS-Exclusive
00:12:34.465                      Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive
00:12:34.465                 Vendor specific (C8h): Supported 
00:12:34.465                 Vendor specific (D2h): Supported 
00:12:34.465                 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:12:34.465                 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:12:34.465  I/O Commands
00:12:34.465  ------------
00:12:34.465                           Flush (00h): Supported LBA-Change 
00:12:34.465                           Write (01h): Supported LBA-Change 
00:12:34.465                            Read (02h): Supported 
00:12:34.465             Write Uncorrectable (04h): Supported LBA-Change 
00:12:34.465              Dataset Management (09h): Supported LBA-Change 
00:12:34.465  
00:12:34.465  Error Log
00:12:34.465  =========
00:12:34.465  Entry: 0
00:12:34.465  Error Count:            0x978e
00:12:34.465  Submission Queue Id:    0x2
00:12:34.465  Command Id:             0xffff
00:12:34.465  Phase Bit:              0
00:12:34.465  Status Code:            0x6
00:12:34.465  Status Code Type:       0x0
00:12:34.465  Do Not Retry:           1
00:12:34.465  Error Location:         0xffff
00:12:34.465  LBA:                    0x0
00:12:34.465  Namespace:              0xffffffff
00:12:34.465  Vendor Log Page:        0x0
00:12:34.465  -----------
00:12:34.466  Entry: 1 through Entry: 63
00:12:34.466  All remaining entries carry the same constant fields as Entry 0
00:12:34.466  (Command Id 0xffff, Phase Bit 0, Status Code 0x6, Status Code Type
00:12:34.466  0x0, Do Not Retry 1, Error Location 0xffff, LBA 0x0, Namespace
00:12:34.466  0xffffffff, Vendor Log Page 0x0); Error Count decrements one per
00:12:34.466  entry from 0x978d to 0x974f, and Submission Queue Id cycles
00:12:34.466  0x2, 0x2, 0x0 across consecutive entries.
00:12:34.469  
00:12:34.469  Arbitration
00:12:34.469  ===========
00:12:34.469  Arbitration Burst:           1
00:12:34.469  Low Priority Weight:         1
00:12:34.469  Medium Priority Weight:      1
00:12:34.469  High Priority Weight:        1
00:12:34.469  
00:12:34.469  Power Management
00:12:34.469  ================
00:12:34.469  Number of Power States:          1
00:12:34.469  Current Power State:             Power State #0
00:12:34.469  Power State #0:
00:12:34.469    Max Power:                     20.00 W
00:12:34.469    Non-Operational State:         Operational
00:12:34.469    Entry Latency:                 Not Reported
00:12:34.469    Exit Latency:                  Not Reported
00:12:34.469    Relative Read Throughput:      0
00:12:34.469    Relative Read Latency:         0
00:12:34.469    Relative Write Throughput:     0
00:12:34.469    Relative Write Latency:        0
00:12:34.469    Idle Power:                    Not Reported
00:12:34.469    Active Power:                  Not Reported
00:12:34.469  Non-Operational Permissive Mode: Not Supported
00:12:34.469  
00:12:34.469  Health Information
00:12:34.469  ==================
00:12:34.469  Critical Warnings:
00:12:34.469    Available Spare Space:     OK
00:12:34.469    Temperature:               OK
00:12:34.469    Device Reliability:        OK
00:12:34.469    Read Only:                 No
00:12:34.469    Volatile Memory Backup:    OK
00:12:34.469  Current Temperature:         310 Kelvin (37 Celsius)
00:12:34.469  Temperature Threshold:       343 Kelvin (70 Celsius)
00:12:34.469  Available Spare:             99%
00:12:34.469  Available Spare Threshold:   10%
00:12:34.469  Life Percentage Used:        32%
00:12:34.469  Data Units Read:             631261409
00:12:34.469  Data Units Written:          792625721
00:12:34.469  Host Read Commands:          37095477696
00:12:34.469  Host Write Commands:         43076258009
00:12:34.469  Controller Busy Time:        3927 minutes
00:12:34.469  Power Cycles:                31
00:12:34.469  Power On Hours:              20880 hours
00:12:34.469  Unsafe Shutdowns:            46
00:12:34.469  Unrecoverable Media Errors:  0
00:12:34.469  Lifetime Error Log Entries:  38798
00:12:34.469  Warning Temperature Time:    2211 minutes
00:12:34.469  Critical Temperature Time:   0 minutes
00:12:34.469  
00:12:34.469  Number of Queues
00:12:34.469  ================
00:12:34.469  Number of I/O Submission Queues:      128
00:12:34.469  Number of I/O Completion Queues:      128
00:12:34.469  
00:12:34.469  Intel Health Information
00:12:34.469  ========================
00:12:34.469  Program Fail Count:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 6
00:12:34.469  Erase Fail Count:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 1
00:12:34.469  Wear Leveling Count:
00:12:34.469    Normalized Value : 65
00:12:34.469    Current Raw Value:
00:12:34.469    Min: 308
00:12:34.469    Max: 1772
00:12:34.469    Avg: 1525
00:12:34.469  End to End Error Detection Count:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 0
00:12:34.469  CRC Error Count:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 0
00:12:34.469  Timed Workload, Media Wear:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 65535
00:12:34.469  Timed Workload, Host Read/Write Ratio:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 65535%
00:12:34.469  Timed Workload, Timer:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 65535
00:12:34.469  Thermal Throttle Status:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value:
00:12:34.469    Percentage: 0%
00:12:34.469    Throttling Event Count: 1
00:12:34.469  Retry Buffer Overflow Counter:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 0
00:12:34.469  PLL Lock Loss Count:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 0
00:12:34.469  NAND Bytes Written:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 104756766
00:12:34.469  Host Bytes Written:
00:12:34.469    Normalized Value : 100
00:12:34.469    Current Raw Value: 12094508
00:12:34.469  
00:12:34.469  Intel Temperature Information
00:12:34.469  =============================
00:12:34.469  Current Temperature: 37
00:12:34.469  Overtemp shutdown Flag for last critical component temperature: 0
00:12:34.469  Overtemp shutdown Flag for life critical component temperature: 0
00:12:34.469  Highest temperature: 73
00:12:34.469  Lowest temperature: 21
00:12:34.469  Specified Maximum Operating Temperature: 70
00:12:34.469  Specified Minimum Operating Temperature: 0
00:12:34.469  Estimated offset: 0
00:12:34.469  
00:12:34.469  
00:12:34.469  Intel Marketing Information
00:12:34.469  ===========================
00:12:34.469  Marketing Product Information:		Intel(R) SSD DC P4510   Series
00:12:34.469  
00:12:34.469  
00:12:34.469  Active Namespaces
00:12:34.469  =================
00:12:34.469  Namespace ID:1
00:12:34.469  Error Recovery Timeout:                Unlimited
00:12:34.469  Command Set Identifier:                NVM (00h)
00:12:34.469  Deallocate:                            Supported
00:12:34.469  Deallocated/Unwritten Error:           Not Supported
00:12:34.469  Deallocated Read Value:                All 0x00
00:12:34.469  Deallocate in Write Zeroes:            Not Supported
00:12:34.469  Deallocated Guard Field:               0xFFFF
00:12:34.469  Flush:                                 Not Supported
00:12:34.469  Reservation:                           Not Supported
00:12:34.469  Namespace Sharing Capabilities:        Private
00:12:34.469  Size (in LBAs):                        7814037168 (3726GiB)
00:12:34.469  Capacity (in LBAs):                    7814037168 (3726GiB)
00:12:34.469  Utilization (in LBAs):                 7814037168 (3726GiB)
00:12:34.469  NGUID:                                 01000000F76E00000000000000000000
00:12:34.469  EUI64:                                 000000000000F76E
00:12:34.469  Thin Provisioning:                     Not Supported
00:12:34.469  Per-NS Atomic Units:                   No
00:12:34.469  NGUID/EUI64 Never Reused:              No
00:12:34.469  Namespace Write Protected:             No
00:12:34.469  Number of LBA Formats:                 2
00:12:34.469  Current LBA Format:                    LBA Format #00
00:12:34.469  LBA Format #00: Data Size:   512  Metadata Size:     0
00:12:34.469  LBA Format #01: Data Size:  4096  Metadata Size:     0
00:12:34.469  
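With the controller dumped once by index, nvme.sh next iterates over the discovered BDFs and runs spdk_nvme_identify again, this time selecting the device by an explicit transport ID string instead of enumerating everything. A minimal sketch of that loop (reusing rootdir and the bdfs array from the discovery sketch above; -i 0 matches the shared-memory group id used in the traced invocations):

    for bdf in "${bdfs[@]}"; do
        # trtype:PCIe plus the BDF pins identify to exactly one controller.
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done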
00:12:34.469   00:43:23	-- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:12:34.469   00:43:23	-- nvme/nvme.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:12:34.469  =====================================================
00:12:34.469  NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:34.469  =====================================================
00:12:34.469  Controller Capabilities/Features
00:12:34.469  ================================
00:12:34.469  Vendor ID:                             8086
00:12:34.469  Subsystem Vendor ID:                   8086
00:12:34.469  Serial Number:                         BTLJ83030AK84P0DGN
00:12:34.469  Model Number:                          INTEL SSDPE2KX040T8
00:12:34.469  Firmware Version:                      VDV10184
00:12:34.470  Recommended Arb Burst:                 0
00:12:34.470  IEEE OUI Identifier:                   e4 d2 5c
00:12:34.470  Multi-path I/O
00:12:34.470    May have multiple subsystem ports:   No
00:12:34.470    May have multiple controllers:       No
00:12:34.470    Associated with SR-IOV VF:           No
00:12:34.470  Max Data Transfer Size:                131072
00:12:34.470  Max Number of Namespaces:              128
00:12:34.470  Max Number of I/O Queues:              128
00:12:34.470  NVMe Specification Version (VS):       1.2
00:12:34.470  NVMe Specification Version (Identify): 1.2
00:12:34.470  Maximum Queue Entries:                 4096
00:12:34.470  Contiguous Queues Required:            Yes
00:12:34.470  Arbitration Mechanisms Supported
00:12:34.470    Weighted Round Robin:                Supported
00:12:34.470    Vendor Specific:                     Not Supported
00:12:34.470  Reset Timeout:                         60000 ms
00:12:34.470  Doorbell Stride:                       4 bytes
00:12:34.470  NVM Subsystem Reset:                   Not Supported
00:12:34.470  Command Sets Supported
00:12:34.470    NVM Command Set:                     Supported
00:12:34.470  Boot Partition:                        Not Supported
00:12:34.470  Memory Page Size Minimum:              4096 bytes
00:12:34.470  Memory Page Size Maximum:              4096 bytes
00:12:34.470  Persistent Memory Region:              Not Supported
00:12:34.470  Optional Asynchronous Events Supported
00:12:34.470    Namespace Attribute Notices:         Not Supported
00:12:34.470    Firmware Activation Notices:         Supported
00:12:34.470    ANA Change Notices:                  Not Supported
00:12:34.470    PLE Aggregate Log Change Notices:    Not Supported
00:12:34.470    LBA Status Info Alert Notices:       Not Supported
00:12:34.470    EGE Aggregate Log Change Notices:    Not Supported
00:12:34.470    Normal NVM Subsystem Shutdown event: Not Supported
00:12:34.470    Zone Descriptor Change Notices:      Not Supported
00:12:34.470    Discovery Log Change Notices:        Not Supported
00:12:34.470  Controller Attributes
00:12:34.470    128-bit Host Identifier:             Not Supported
00:12:34.470    Non-Operational Permissive Mode:     Not Supported
00:12:34.470    NVM Sets:                            Not Supported
00:12:34.470    Read Recovery Levels:                Not Supported
00:12:34.470    Endurance Groups:                    Not Supported
00:12:34.470    Predictable Latency Mode:            Not Supported
00:12:34.470    Traffic Based Keep Alive:            Not Supported
00:12:34.470    Namespace Granularity:               Not Supported
00:12:34.470    SQ Associations:                     Not Supported
00:12:34.470    UUID List:                           Not Supported
00:12:34.470    Multi-Domain Subsystem:              Not Supported
00:12:34.470    Fixed Capacity Management:           Not Supported
00:12:34.470    Variable Capacity Management:        Not Supported
00:12:34.470    Delete Endurance Group:              Not Supported
00:12:34.470    Delete NVM Set:                      Not Supported
00:12:34.470    Extended LBA Formats Supported:      Not Supported
00:12:34.470    Flexible Data Placement Supported:   Not Supported
00:12:34.470  
00:12:34.470  Controller Memory Buffer Support
00:12:34.470  ================================
00:12:34.470  Supported:                             No
00:12:34.470  
00:12:34.470  Persistent Memory Region Support
00:12:34.470  ================================
00:12:34.470  Supported:                             No
00:12:34.470  
00:12:34.470  Admin Command Set Attributes
00:12:34.470  ============================
00:12:34.470  Security Send/Receive:                 Not Supported
00:12:34.470  Format NVM:                            Supported
00:12:34.470  Firmware Activate/Download:            Supported
00:12:34.470  Namespace Management:                  Supported
00:12:34.470  Device Self-Test:                      Not Supported
00:12:34.470  Directives:                            Not Supported
00:12:34.470  NVMe-MI:                               Not Supported
00:12:34.470  Virtualization Management:             Not Supported
00:12:34.470  Doorbell Buffer Config:                Not Supported
00:12:34.470  Get LBA Status Capability:             Not Supported
00:12:34.470  Command & Feature Lockdown Capability: Not Supported
00:12:34.470  Abort Command Limit:                   4
00:12:34.470  Async Event Request Limit:             4
00:12:34.470  Number of Firmware Slots:              4
00:12:34.470  Firmware Slot 1 Read-Only:             No
00:12:34.470  Firmware Activation Without Reset:     Yes
00:12:34.470  Multiple Update Detection Support:     No
00:12:34.470  Firmware Update Granularity:           No Information Provided
00:12:34.470  Per-Namespace SMART Log:               No
00:12:34.470  Asymmetric Namespace Access Log Page:  Not Supported
00:12:34.470  Subsystem NQN:                         
00:12:34.470  Command Effects Log Page:              Supported
00:12:34.470  Get Log Page Extended Data:            Supported
00:12:34.470  Telemetry Log Pages:                   Supported
00:12:34.470  Persistent Event Log Pages:            Not Supported
00:12:34.470  Supported Log Pages Log Page:          May Support
00:12:34.470  Commands Supported & Effects Log Page: Not Supported
00:12:34.470  Feature Identifiers & Effects Log Page: May Support
00:12:34.470  NVMe-MI Commands & Effects Log Page:   May Support
00:12:34.470  Data Area 4 for Telemetry Log:         Not Supported
00:12:34.470  Error Log Page Entries Supported:      64
00:12:34.470  Keep Alive:                            Not Supported
00:12:34.470  
00:12:34.470  NVM Command Set Attributes
00:12:34.470  ==========================
00:12:34.470  Submission Queue Entry Size
00:12:34.470    Max:                       64
00:12:34.470    Min:                       64
00:12:34.470  Completion Queue Entry Size
00:12:34.470    Max:                       16
00:12:34.470    Min:                       16
00:12:34.470  Number of Namespaces:        128
00:12:34.470  Compare Command:             Not Supported
00:12:34.470  Write Uncorrectable Command: Supported
00:12:34.470  Dataset Management Command:  Supported
00:12:34.470  Write Zeroes Command:        Not Supported
00:12:34.470  Set Features Save Field:     Not Supported
00:12:34.470  Reservations:                Not Supported
00:12:34.470  Timestamp:                   Not Supported
00:12:34.470  Copy:                        Not Supported
00:12:34.470  Volatile Write Cache:        Not Present
00:12:34.470  Atomic Write Unit (Normal):  1
00:12:34.470  Atomic Write Unit (PFail):   1
00:12:34.470  Atomic Compare & Write Unit: 1
00:12:34.470  Fused Compare & Write:       Not Supported
00:12:34.470  Scatter-Gather List
00:12:34.470    SGL Command Set:           Not Supported
00:12:34.470    SGL Keyed:                 Not Supported
00:12:34.470    SGL Bit Bucket Descriptor: Not Supported
00:12:34.470    SGL Metadata Pointer:      Not Supported
00:12:34.470    Oversized SGL:             Not Supported
00:12:34.470    SGL Metadata Address:      Not Supported
00:12:34.470    SGL Offset:                Not Supported
00:12:34.470    Transport SGL Data Block:  Not Supported
00:12:34.470  Replay Protected Memory Block:  Not Supported
00:12:34.470  
00:12:34.470  Firmware Slot Information
00:12:34.470  =========================
00:12:34.470  Active slot:                 1
00:12:34.470  Slot 1 Firmware Revision:    VDV10184
00:12:34.470  
00:12:34.470  
00:12:34.470  Commands Supported and Effects
00:12:34.470  ==============================
00:12:34.470  Admin Commands
00:12:34.470  --------------
00:12:34.470     Delete I/O Submission Queue (00h): Supported 
00:12:34.470     Create I/O Submission Queue (01h): Supported All-NS-Exclusive
00:12:34.470                    Get Log Page (02h): Supported 
00:12:34.470     Delete I/O Completion Queue (04h): Supported 
00:12:34.470     Create I/O Completion Queue (05h): Supported All-NS-Exclusive
00:12:34.470                        Identify (06h): Supported 
00:12:34.470                           Abort (08h): Supported 
00:12:34.470                    Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 
00:12:34.470                    Get Features (0Ah): Supported 
00:12:34.470      Asynchronous Event Request (0Ch): Supported 
00:12:34.470            Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive
00:12:34.470                 Firmware Commit (10h): Supported Ctrlr-Cap-Change 
00:12:34.470         Firmware Image Download (11h): Supported 
00:12:34.470            Namespace Attachment (15h): Supported Per-NS-Exclusive
00:12:34.470                      Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive
00:12:34.470                 Vendor specific (C8h): Supported 
00:12:34.470                 Vendor specific (D2h): Supported 
00:12:34.470                 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:12:34.470                 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:12:34.470  I/O Commands
00:12:34.470  ------------
00:12:34.470                           Flush (00h): Supported LBA-Change 
00:12:34.470                           Write (01h): Supported LBA-Change 
00:12:34.470                            Read (02h): Supported 
00:12:34.470             Write Uncorrectable (04h): Supported LBA-Change 
00:12:34.470              Dataset Management (09h): Supported LBA-Change 
00:12:34.470  
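The per-command annotations above (LBA-Change, NS-Cap-Change, Ctrlr-Cap-Change, and the *-NS-Exclusive notes) are decoded from the Commands Supported and Effects log page (Log Identifier 05h). A hedged nvme-cli equivalent, /dev/nvme0 again being a placeholder:

  # Commands Supported and Effects log page (LID 05h)
  sudo nvme effects-log /dev/nvme0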
00:12:34.470  Error Log
00:12:34.470  =========
00:12:34.470  Entry: 0
00:12:34.470  Error Count:            0x978e
00:12:34.470  Submission Queue Id:    0x2
00:12:34.470  Command Id:             0xffff
00:12:34.470  Phase Bit:              0
00:12:34.470  Status Code:            0x6
00:12:34.470  Status Code Type:       0x0
00:12:34.470  Do Not Retry:           1
00:12:34.470  Error Location:         0xffff
00:12:34.470  LBA:                    0x0
00:12:34.470  Namespace:              0xffffffff
00:12:34.470  Vendor Log Page:        0x0
00:12:34.470  -----------
00:12:34.470  Entry: 1
00:12:34.470  Error Count:            0x978d
00:12:34.470  Submission Queue Id:    0x2
00:12:34.470  Command Id:             0xffff
00:12:34.470  Phase Bit:              0
00:12:34.470  Status Code:            0x6
00:12:34.470  Status Code Type:       0x0
00:12:34.470  Do Not Retry:           1
00:12:34.470  Error Location:         0xffff
00:12:34.470  LBA:                    0x0
00:12:34.470  Namespace:              0xffffffff
00:12:34.470  Vendor Log Page:        0x0
00:12:34.470  -----------
00:12:34.470  Entry: 2
00:12:34.470  Error Count:            0x978c
00:12:34.470  Submission Queue Id:    0x0
00:12:34.470  Command Id:             0xffff
00:12:34.470  Phase Bit:              0
00:12:34.470  Status Code:            0x6
00:12:34.470  Status Code Type:       0x0
00:12:34.470  Do Not Retry:           1
00:12:34.470  Error Location:         0xffff
00:12:34.470  LBA:                    0x0
00:12:34.470  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 3
00:12:34.471  Error Count:            0x978b
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 4
00:12:34.471  Error Count:            0x978a
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 5
00:12:34.471  Error Count:            0x9789
00:12:34.471  Submission Queue Id:    0x0
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 6
00:12:34.471  Error Count:            0x9788
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 7
00:12:34.471  Error Count:            0x9787
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 8
00:12:34.471  Error Count:            0x9786
00:12:34.471  Submission Queue Id:    0x0
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 9
00:12:34.471  Error Count:            0x9785
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 10
00:12:34.471  Error Count:            0x9784
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 11
00:12:34.471  Error Count:            0x9783
00:12:34.471  Submission Queue Id:    0x0
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 12
00:12:34.471  Error Count:            0x9782
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 13
00:12:34.471  Error Count:            0x9781
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 14
00:12:34.471  Error Count:            0x9780
00:12:34.471  Submission Queue Id:    0x0
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 15
00:12:34.471  Error Count:            0x977f
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 16
00:12:34.471  Error Count:            0x977e
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 17
00:12:34.471  Error Count:            0x977d
00:12:34.471  Submission Queue Id:    0x0
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 18
00:12:34.471  Error Count:            0x977c
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 19
00:12:34.471  Error Count:            0x977b
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 20
00:12:34.471  Error Count:            0x977a
00:12:34.471  Submission Queue Id:    0x0
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 21
00:12:34.471  Error Count:            0x9779
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.471  Namespace:              0xffffffff
00:12:34.471  Vendor Log Page:        0x0
00:12:34.471  -----------
00:12:34.471  Entry: 22
00:12:34.471  Error Count:            0x9778
00:12:34.471  Submission Queue Id:    0x2
00:12:34.471  Command Id:             0xffff
00:12:34.471  Phase Bit:              0
00:12:34.471  Status Code:            0x6
00:12:34.471  Status Code Type:       0x0
00:12:34.471  Do Not Retry:           1
00:12:34.471  Error Location:         0xffff
00:12:34.471  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 23
00:12:34.472  Error Count:            0x9777
00:12:34.472  Submission Queue Id:    0x0
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 24
00:12:34.472  Error Count:            0x9776
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 25
00:12:34.472  Error Count:            0x9775
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 26
00:12:34.472  Error Count:            0x9774
00:12:34.472  Submission Queue Id:    0x0
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 27
00:12:34.472  Error Count:            0x9773
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 28
00:12:34.472  Error Count:            0x9772
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 29
00:12:34.472  Error Count:            0x9771
00:12:34.472  Submission Queue Id:    0x0
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 30
00:12:34.472  Error Count:            0x9770
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 31
00:12:34.472  Error Count:            0x976f
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 32
00:12:34.472  Error Count:            0x976e
00:12:34.472  Submission Queue Id:    0x0
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 33
00:12:34.472  Error Count:            0x976d
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 34
00:12:34.472  Error Count:            0x976c
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 35
00:12:34.472  Error Count:            0x976b
00:12:34.472  Submission Queue Id:    0x0
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 36
00:12:34.472  Error Count:            0x976a
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 37
00:12:34.472  Error Count:            0x9769
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 38
00:12:34.472  Error Count:            0x9768
00:12:34.472  Submission Queue Id:    0x0
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 39
00:12:34.472  Error Count:            0x9767
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 40
00:12:34.472  Error Count:            0x9766
00:12:34.472  Submission Queue Id:    0x2
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 41
00:12:34.472  Error Count:            0x9765
00:12:34.472  Submission Queue Id:    0x0
00:12:34.472  Command Id:             0xffff
00:12:34.472  Phase Bit:              0
00:12:34.472  Status Code:            0x6
00:12:34.472  Status Code Type:       0x0
00:12:34.472  Do Not Retry:           1
00:12:34.472  Error Location:         0xffff
00:12:34.472  LBA:                    0x0
00:12:34.472  Namespace:              0xffffffff
00:12:34.472  Vendor Log Page:        0x0
00:12:34.472  -----------
00:12:34.472  Entry: 42
00:12:34.473  Error Count:            0x9764
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 43
00:12:34.473  Error Count:            0x9763
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 44
00:12:34.473  Error Count:            0x9762
00:12:34.473  Submission Queue Id:    0x0
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 45
00:12:34.473  Error Count:            0x9761
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 46
00:12:34.473  Error Count:            0x9760
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 47
00:12:34.473  Error Count:            0x975f
00:12:34.473  Submission Queue Id:    0x0
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 48
00:12:34.473  Error Count:            0x975e
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 49
00:12:34.473  Error Count:            0x975d
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 50
00:12:34.473  Error Count:            0x975c
00:12:34.473  Submission Queue Id:    0x0
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 51
00:12:34.473  Error Count:            0x975b
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 52
00:12:34.473  Error Count:            0x975a
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 53
00:12:34.473  Error Count:            0x9759
00:12:34.473  Submission Queue Id:    0x0
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 54
00:12:34.473  Error Count:            0x9758
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 55
00:12:34.473  Error Count:            0x9757
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 56
00:12:34.473  Error Count:            0x9756
00:12:34.473  Submission Queue Id:    0x0
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 57
00:12:34.473  Error Count:            0x9755
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 58
00:12:34.473  Error Count:            0x9754
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 59
00:12:34.473  Error Count:            0x9753
00:12:34.473  Submission Queue Id:    0x0
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 60
00:12:34.473  Error Count:            0x9752
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 61
00:12:34.473  Error Count:            0x9751
00:12:34.473  Submission Queue Id:    0x2
00:12:34.473  Command Id:             0xffff
00:12:34.473  Phase Bit:              0
00:12:34.473  Status Code:            0x6
00:12:34.473  Status Code Type:       0x0
00:12:34.473  Do Not Retry:           1
00:12:34.473  Error Location:         0xffff
00:12:34.473  LBA:                    0x0
00:12:34.473  Namespace:              0xffffffff
00:12:34.473  Vendor Log Page:        0x0
00:12:34.473  -----------
00:12:34.473  Entry: 62
00:12:34.474  Error Count:            0x9750
00:12:34.474  Submission Queue Id:    0x0
00:12:34.474  Command Id:             0xffff
00:12:34.474  Phase Bit:              0
00:12:34.474  Status Code:            0x6
00:12:34.474  Status Code Type:       0x0
00:12:34.474  Do Not Retry:           1
00:12:34.474  Error Location:         0xffff
00:12:34.474  LBA:                    0x0
00:12:34.474  Namespace:              0xffffffff
00:12:34.474  Vendor Log Page:        0x0
00:12:34.474  -----------
00:12:34.474  Entry: 63
00:12:34.474  Error Count:            0x974f
00:12:34.474  Submission Queue Id:    0x2
00:12:34.474  Command Id:             0xffff
00:12:34.474  Phase Bit:              0
00:12:34.474  Status Code:            0x6
00:12:34.474  Status Code Type:       0x0
00:12:34.474  Do Not Retry:           1
00:12:34.474  Error Location:         0xffff
00:12:34.474  LBA:                    0x0
00:12:34.474  Namespace:              0xffffffff
00:12:34.474  Vendor Log Page:        0x0
00:12:34.474  
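All 64 entries (the maximum advertised by 'Error Log Page Entries Supported' above) record the same generic failure: Status Code Type 0x0 with Status Code 0x6, i.e. a generic-command-status Internal Error carrying no valid command, LBA, or namespace. Note that the newest entry's Error Count of 0x978e is 38798 in decimal, which matches the 'Lifetime Error Log Entries: 38798' figure in the Health Information section below; a one-line check:

  # Hex error count from entry 0 -> decimal lifetime total
  printf '%d\n' 0x978e    # 38798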
00:12:34.474  Arbitration
00:12:34.474  ===========
00:12:34.474  Arbitration Burst:           1
00:12:34.474  Low Priority Weight:         1
00:12:34.474  Medium Priority Weight:      1
00:12:34.474  High Priority Weight:        1
00:12:34.474  
00:12:34.474  Power Management
00:12:34.474  ================
00:12:34.474  Number of Power States:          1
00:12:34.474  Current Power State:             Power State #0
00:12:34.474  Power State #0:
00:12:34.474    Max Power:                     20.00 W
00:12:34.474    Non-Operational State:         Operational
00:12:34.474    Entry Latency:                 Not Reported
00:12:34.474    Exit Latency:                  Not Reported
00:12:34.474    Relative Read Throughput:      0
00:12:34.474    Relative Read Latency:         0
00:12:34.474    Relative Write Throughput:     0
00:12:34.474    Relative Write Latency:        0
00:12:34.474    Idle Power:                     Not Reported
00:12:34.474    Active Power:                   Not Reported
00:12:34.474  Non-Operational Permissive Mode: Not Supported
00:12:34.474  
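With a single 20 W power state that is always operational, there is nothing here for runtime power management to negotiate. For completeness, the active power state can be read back through the Power Management feature (Feature Identifier 02h); a hedged sketch with a placeholder device path:

  # Get Features, FID 0x02 (Power Management), human-readable
  sudo nvme get-feature /dev/nvme0 -f 0x02 -H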
00:12:34.474  Health Information
00:12:34.474  ==================
00:12:34.474  Critical Warnings:
00:12:34.474    Available Spare Space:     OK
00:12:34.474    Temperature:               OK
00:12:34.474    Device Reliability:        OK
00:12:34.474    Read Only:                 No
00:12:34.474    Volatile Memory Backup:    OK
00:12:34.474  Current Temperature:         310 Kelvin (37 Celsius)
00:12:34.474  Temperature Threshold:       343 Kelvin (70 Celsius)
00:12:34.474  Available Spare:             99%
00:12:34.474  Available Spare Threshold:   10%
00:12:34.474  Life Percentage Used:        32%
00:12:34.474  Data Units Read:             631261409
00:12:34.474  Data Units Written:          792625721
00:12:34.474  Host Read Commands:          37095477696
00:12:34.474  Host Write Commands:         43076258009
00:12:34.474  Controller Busy Time:        3927 minutes
00:12:34.474  Power Cycles:                31
00:12:34.474  Power On Hours:              20880 hours
00:12:34.474  Unsafe Shutdowns:            46
00:12:34.474  Unrecoverable Media Errors:  0
00:12:34.474  Lifetime Error Log Entries:  38798
00:12:34.474  Warning Temperature Time:    2211 minutes
00:12:34.474  Critical Temperature Time:   0 minutes
00:12:34.474  
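Two unit conversions make this section easier to read: temperatures are reported in kelvins (310 - 273 = 37 degrees C, well below the 343 K / 70 degrees C threshold), and one data unit is 1000 512-byte sectors (512,000 bytes), so the lifetime totals work out to roughly 323 TB read and 406 TB written. The arithmetic, as a sketch:

  # Data units -> terabytes (1 data unit = 512,000 bytes)
  echo $(( 631261409 * 512000 / 1000**4 ))   # 323  (~323.2 TB read)
  echo $(( 792625721 * 512000 / 1000**4 ))   # 405  (~405.8 TB written)
  echo $(( 310 - 273 ))                      # 37   (current temp in degrees C)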
00:12:34.474  Number of Queues
00:12:34.474  ================
00:12:34.474  Number of I/O Submission Queues:      128
00:12:34.474  Number of I/O Completion Queues:      128
00:12:34.474  
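The allocated I/O queue counts come from the Number of Queues feature (Feature Identifier 07h). The spec encodes these values 0-based, so 128 queues of each kind appear as 127 (0x7F) in the raw completion dword that a query like the placeholder sketch below returns:

  # Get Features, FID 0x07 (Number of Queues); raw values are 0-based
  sudo nvme get-feature /dev/nvme0 -f 0x07 -H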
00:12:34.474  Intel Health Information
00:12:34.474  ========================
00:12:34.474  Program Fail Count:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 6
00:12:34.474  Erase Fail Count:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 1
00:12:34.474  Wear Leveling Count:
00:12:34.474    Normalized Value : 65
00:12:34.474    Current Raw Value:
00:12:34.474    Min: 308
00:12:34.474    Max: 1772
00:12:34.474    Avg: 1525
00:12:34.474  End to End Error Detection Count:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 0
00:12:34.474  CRC Error Count:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 0
00:12:34.474  Timed Workload, Media Wear:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 65535
00:12:34.474  Timed Workload, Host Read/Write Ratio:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 65535%
00:12:34.474  Timed Workload, Timer:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 65535
00:12:34.474  Thermal Throttle Status:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value:
00:12:34.474    Percentage: 0%
00:12:34.474    Throttling Event Count: 1
00:12:34.474  Retry Buffer Overflow Counter:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 0
00:12:34.474  PLL Lock Loss Count:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 0
00:12:34.474  NAND Bytes Written:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 104756766
00:12:34.474  Host Bytes Written:
00:12:34.474    Normalized Value : 100
00:12:34.474    Current Raw Value: 12094508
00:12:34.474  
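The NAND vs. host bytes-written counters in this vendor log give a rough write-amplification estimate. Both counters appear to use the same vendor-specific unit (an assumption; the unit size itself is not stated in the log), so the ratio is meaningful even without knowing it; as a sketch:

  # Write amplification factor ~= NAND bytes written / host bytes written
  awk 'BEGIN { printf "%.2f\n", 104756766 / 12094508 }'   # ~8.66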
00:12:34.474  Intel Temperature Information
00:12:34.474  =============================
00:12:34.474  Current Temperature: 37
00:12:34.474  Overtemp shutdown Flag for last critical component temperature: 0
00:12:34.474  Overtemp shutdown Flag for life critical component temperature: 0
00:12:34.474  Highest temperature: 73
00:12:34.474  Lowest temperature: 21
00:12:34.474  Specified Maximum Operating Temperature: 70
00:12:34.474  Specified Minimum Operating Temperature: 0
00:12:34.474  Estimated offset: 0
00:12:34.474  
00:12:34.474  
00:12:34.474  Intel Marketing Information
00:12:34.474  ===========================
00:12:34.474  Marketing Product Information:         Intel(R) SSD DC P4510 Series
00:12:34.474  
00:12:34.474  
00:12:34.474  Active Namespaces
00:12:34.474  =================
00:12:34.474  Namespace ID:1
00:12:34.474  Error Recovery Timeout:                Unlimited
00:12:34.474  Command Set Identifier:                NVM (00h)
00:12:34.474  Deallocate:                            Supported
00:12:34.474  Deallocated/Unwritten Error:           Not Supported
00:12:34.474  Deallocated Read Value:                All 0x00
00:12:34.474  Deallocate in Write Zeroes:            Not Supported
00:12:34.474  Deallocated Guard Field:               0xFFFF
00:12:34.474  Flush:                                 Not Supported
00:12:34.474  Reservation:                           Not Supported
00:12:34.474  Namespace Sharing Capabilities:        Private
00:12:34.474  Size (in LBAs):                        7814037168 (3726GiB)
00:12:34.474  Capacity (in LBAs):                    7814037168 (3726GiB)
00:12:34.474  Utilization (in LBAs):                 7814037168 (3726GiB)
00:12:34.474  NGUID:                                 01000000F76E00000000000000000000
00:12:34.474  EUI64:                                 000000000000F76E
00:12:34.474  Thin Provisioning:                     Not Supported
00:12:34.474  Per-NS Atomic Units:                   No
00:12:34.474  NGUID/EUI64 Never Reused:              No
00:12:34.474  Namespace Write Protected:             No
00:12:34.474  Number of LBA Formats:                 2
00:12:34.474  Current LBA Format:                    LBA Format #00
00:12:34.474  LBA Format #00: Data Size:   512  Metadata Size:     0
00:12:34.474  LBA Format #01: Data Size:  4096  Metadata Size:     0
00:12:34.474  
00:12:34.474  
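Size, Capacity, and Utilization are identical because the namespace is not thin provisioned (see the Thin Provisioning line above), and the GiB figure follows directly from the LBA count at the current 512-byte format:

  # 7814037168 LBAs x 512 B = 4,000,787,030,016 B ~= 3726 GiB (a nominal 4 TB drive)
  echo $(( 7814037168 * 512 ))                                   # 4000787030016
  awk 'BEGIN { printf "%.0f GiB\n", 7814037168 * 512 / 2^30 }'   # 3726 GiB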
00:12:34.474  real	0m0.776s
00:12:34.474  user	0m0.237s
00:12:34.474  sys	0m0.448s
00:12:34.474   00:43:23	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:34.474   00:43:23	-- common/autotest_common.sh@10 -- # set +x
00:12:34.474  ************************************
00:12:34.474  END TEST nvme_identify
00:12:34.474  ************************************
00:12:34.733   00:43:23	-- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:12:34.733   00:43:23	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:34.733   00:43:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:34.733   00:43:23	-- common/autotest_common.sh@10 -- # set +x
00:12:34.733  ************************************
00:12:34.733  START TEST nvme_perf
00:12:34.733  ************************************
00:12:34.733   00:43:23	-- common/autotest_common.sh@1114 -- # nvme_perf
00:12:34.733   00:43:23	-- nvme/nvme.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:12:36.112  Initializing NVMe Controllers
00:12:36.112  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:36.112  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:12:36.112  Initialization complete. Launching workers.
00:12:36.112  ========================================================
00:12:36.112                                                                             Latency(us)
00:12:36.112  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:36.112  PCIE (0000:5e:00.0) NSID 1 from core  0:  104886.71    1229.14    1220.01      69.50    3230.61
00:12:36.112  ========================================================
00:12:36.112  Total                                  :  104886.71    1229.14    1220.01      69.50    3230.61
00:12:36.112  
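Reading the spdk_nvme_perf flags per SPDK's usage text (summarized here as a hedged gloss, not verified against this exact build): -q 128 keeps 128 I/Os outstanding, -w read issues sequential reads, -o 12288 uses a 12 KiB I/O size, -t 1 runs for one second, and giving -L twice requests the detailed latency histogram printed below. The MiB/s column is then just IOPS times I/O size:

  # Throughput check: IOPS x 12288 B per I/O / 2^20 B per MiB
  awk 'BEGIN { printf "%.2f MiB/s\n", 104886.71 * 12288 / 2^20 }'   # 1229.14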
00:12:36.112  Summary latency data for PCIE (0000:5e:00.0) NSID 1                  from core 0:
00:12:36.112  =================================================================================
00:12:36.112    1.00000% :   236.856us
00:12:36.112   10.00000% :   541.384us
00:12:36.112   25.00000% :   790.706us
00:12:36.112   50.00000% :  1189.621us
00:12:36.112   75.00000% :  1638.400us
00:12:36.112   90.00000% :  1951.833us
00:12:36.112   95.00000% :  2108.550us
00:12:36.112   98.00000% :  2265.266us
00:12:36.112   99.00000% :  2364.995us
00:12:36.112   99.50000% :  2478.970us
00:12:36.112   99.90000% :  2692.675us
00:12:36.112   99.99000% :  3006.108us
00:12:36.112   99.99900% :  3191.318us
00:12:36.112   99.99990% :  3234.059us
00:12:36.112   99.99999% :  3234.059us
00:12:36.112  
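Each percentile row is the latency at or below which that fraction of I/Os completed, read off the cumulative histogram that follows; the median here is about 1.19 ms. As a sanity check, Little's law ties mean latency to queue depth and throughput: 128 I/Os in flight at 104886.71 IOPS predicts 128 / 104886.71 s ~= 1220 us, matching the reported 1220.01 us average:

  # Little's law: mean latency = queue depth / IOPS
  awk 'BEGIN { printf "%.1f us\n", 128 / 104886.71 * 1e6 }'   # ~1220.4 us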
00:12:36.112  Latency histogram for PCIE (0000:5e:00.0) NSID 1                  from core 0:
00:12:36.112  ==============================================================================
00:12:36.112         Range in us     Cumulative    IO count
00:12:36.112     69.454 -    69.899:    0.0010%  (        1)
00:12:36.112     77.023 -    77.468:    0.0019%  (        1)
00:12:36.112     79.694 -    80.139:    0.0029%  (        1)
00:12:36.112     80.139 -    80.584:    0.0038%  (        1)
00:12:36.112     81.475 -    81.920:    0.0048%  (        1)
00:12:36.112     85.482 -    85.927:    0.0057%  (        1)
00:12:36.112     86.372 -    86.817:    0.0067%  (        1)
00:12:36.112     86.817 -    87.263:    0.0076%  (        1)
00:12:36.112     87.708 -    88.153:    0.0086%  (        1)
00:12:36.112     89.489 -    89.934:    0.0114%  (        3)
00:12:36.112     89.934 -    90.379:    0.0133%  (        2)
00:12:36.112     90.379 -    90.824:    0.0153%  (        2)
00:12:36.112     90.824 -    91.270:    0.0162%  (        1)
00:12:36.112     93.941 -    94.386:    0.0172%  (        1)
00:12:36.112     94.386 -    94.831:    0.0181%  (        1)
00:12:36.112     94.831 -    95.277:    0.0191%  (        1)
00:12:36.112     97.057 -    97.503:    0.0200%  (        1)
00:12:36.112     97.948 -    98.393:    0.0210%  (        1)
00:12:36.112     98.838 -    99.283:    0.0219%  (        1)
00:12:36.112    100.174 -   100.619:    0.0238%  (        2)
00:12:36.112    100.619 -   101.064:    0.0248%  (        1)
00:12:36.112    101.064 -   101.510:    0.0267%  (        2)
00:12:36.112    101.510 -   101.955:    0.0286%  (        2)
00:12:36.112    101.955 -   102.400:    0.0296%  (        1)
00:12:36.112    102.845 -   103.290:    0.0305%  (        1)
00:12:36.112    104.181 -   104.626:    0.0324%  (        2)
00:12:36.112    105.517 -   105.962:    0.0334%  (        1)
00:12:36.112    106.407 -   106.852:    0.0343%  (        1)
00:12:36.112    106.852 -   107.297:    0.0353%  (        1)
00:12:36.112    108.188 -   108.633:    0.0362%  (        1)
00:12:36.112    109.078 -   109.523:    0.0381%  (        2)
00:12:36.112    109.523 -   109.969:    0.0391%  (        1)
00:12:36.112    110.414 -   110.859:    0.0429%  (        4)
00:12:36.112    110.859 -   111.304:    0.0448%  (        2)
00:12:36.112    111.304 -   111.750:    0.0458%  (        1)
00:12:36.112    111.750 -   112.195:    0.0477%  (        2)
00:12:36.112    112.195 -   112.640:    0.0486%  (        1)
00:12:36.112    112.640 -   113.085:    0.0505%  (        2)
00:12:36.112    113.085 -   113.530:    0.0524%  (        2)
00:12:36.112    113.530 -   113.976:    0.0572%  (        5)
00:12:36.112    113.976 -   114.866:    0.0591%  (        2)
00:12:36.112    114.866 -   115.757:    0.0610%  (        2)
00:12:36.112    115.757 -   116.647:    0.0620%  (        1)
00:12:36.112    116.647 -   117.537:    0.0648%  (        3)
00:12:36.112    117.537 -   118.428:    0.0686%  (        4)
00:12:36.112    118.428 -   119.318:    0.0753%  (        7)
00:12:36.112    119.318 -   120.209:    0.0763%  (        1)
00:12:36.112    120.209 -   121.099:    0.0791%  (        3)
00:12:36.112    121.099 -   121.990:    0.0810%  (        2)
00:12:36.112    121.990 -   122.880:    0.0848%  (        4)
00:12:36.112    122.880 -   123.770:    0.0868%  (        2)
00:12:36.112    123.770 -   124.661:    0.0896%  (        3)
00:12:36.112    125.551 -   126.442:    0.0944%  (        5)
00:12:36.112    126.442 -   127.332:    0.0953%  (        1)
00:12:36.112    127.332 -   128.223:    0.1001%  (        5)
00:12:36.112    128.223 -   129.113:    0.1030%  (        3)
00:12:36.112    129.113 -   130.003:    0.1077%  (        5)
00:12:36.112    130.003 -   130.894:    0.1125%  (        5)
00:12:36.112    130.894 -   131.784:    0.1192%  (        7)
00:12:36.112    131.784 -   132.675:    0.1258%  (        7)
00:12:36.112    132.675 -   133.565:    0.1335%  (        8)
00:12:36.112    133.565 -   134.456:    0.1382%  (        5)
00:12:36.112    134.456 -   135.346:    0.1420%  (        4)
00:12:36.112    135.346 -   136.237:    0.1497%  (        8)
00:12:36.112    136.237 -   137.127:    0.1554%  (        6)
00:12:36.112    137.127 -   138.017:    0.1592%  (        4)
00:12:36.112    138.017 -   138.908:    0.1678%  (        9)
00:12:36.112    138.908 -   139.798:    0.1735%  (        6)
00:12:36.112    139.798 -   140.689:    0.1783%  (        5)
00:12:36.112    140.689 -   141.579:    0.1850%  (        7)
00:12:36.112    141.579 -   142.470:    0.1888%  (        4)
00:12:36.112    142.470 -   143.360:    0.1926%  (        4)
00:12:36.112    143.360 -   144.250:    0.2002%  (        8)
00:12:36.112    144.250 -   145.141:    0.2040%  (        4)
00:12:36.112    145.141 -   146.031:    0.2097%  (        6)
00:12:36.112    146.031 -   146.922:    0.2212%  (       12)
00:12:36.112    146.922 -   147.812:    0.2259%  (        5)
00:12:36.112    147.812 -   148.703:    0.2317%  (        6)
00:12:36.112    148.703 -   149.593:    0.2364%  (        5)
00:12:36.112    149.593 -   150.483:    0.2412%  (        5)
00:12:36.112    150.483 -   151.374:    0.2441%  (        3)
00:12:36.112    151.374 -   152.264:    0.2488%  (        5)
00:12:36.112    152.264 -   153.155:    0.2507%  (        2)
00:12:36.112    153.155 -   154.045:    0.2555%  (        5)
00:12:36.112    154.045 -   154.936:    0.2612%  (        6)
00:12:36.112    154.936 -   155.826:    0.2641%  (        3)
00:12:36.112    155.826 -   156.717:    0.2736%  (       10)
00:12:36.112    156.717 -   157.607:    0.2822%  (        9)
00:12:36.112    157.607 -   158.497:    0.2879%  (        6)
00:12:36.112    158.497 -   159.388:    0.2965%  (        9)
00:12:36.112    159.388 -   160.278:    0.3022%  (        6)
00:12:36.112    160.278 -   161.169:    0.3089%  (        7)
00:12:36.112    161.169 -   162.059:    0.3146%  (        6)
00:12:36.112    162.059 -   162.950:    0.3203%  (        6)
00:12:36.112    162.950 -   163.840:    0.3260%  (        6)
00:12:36.112    163.840 -   164.730:    0.3289%  (        3)
00:12:36.112    164.730 -   165.621:    0.3346%  (        6)
00:12:36.112    165.621 -   166.511:    0.3423%  (        8)
00:12:36.112    166.511 -   167.402:    0.3432%  (        1)
00:12:36.112    167.402 -   168.292:    0.3518%  (        9)
00:12:36.112    168.292 -   169.183:    0.3594%  (        8)
00:12:36.112    169.183 -   170.073:    0.3632%  (        4)
00:12:36.112    170.073 -   170.963:    0.3661%  (        3)
00:12:36.112    170.963 -   171.854:    0.3718%  (        6)
00:12:36.112    171.854 -   172.744:    0.3785%  (        7)
00:12:36.112    172.744 -   173.635:    0.3804%  (        2)
00:12:36.112    173.635 -   174.525:    0.3852%  (        5)
00:12:36.112    174.525 -   175.416:    0.3956%  (       11)
00:12:36.112    175.416 -   176.306:    0.4090%  (       14)
00:12:36.112    176.306 -   177.197:    0.4147%  (        6)
00:12:36.112    177.197 -   178.087:    0.4204%  (        6)
00:12:36.112    178.087 -   178.977:    0.4300%  (       10)
00:12:36.112    178.977 -   179.868:    0.4376%  (        8)
00:12:36.112    179.868 -   180.758:    0.4433%  (        6)
00:12:36.112    180.758 -   181.649:    0.4500%  (        7)
00:12:36.112    181.649 -   182.539:    0.4557%  (        6)
00:12:36.112    182.539 -   183.430:    0.4643%  (        9)
00:12:36.112    183.430 -   184.320:    0.4748%  (       11)
00:12:36.112    184.320 -   185.210:    0.4833%  (        9)
00:12:36.112    185.210 -   186.101:    0.4929%  (       10)
00:12:36.112    186.101 -   186.991:    0.4976%  (        5)
00:12:36.112    186.991 -   187.882:    0.5062%  (        9)
00:12:36.112    187.882 -   188.772:    0.5158%  (       10)
00:12:36.112    188.772 -   189.663:    0.5224%  (        7)
00:12:36.112    189.663 -   190.553:    0.5348%  (       13)
00:12:36.112    190.553 -   191.443:    0.5434%  (        9)
00:12:36.112    191.443 -   192.334:    0.5520%  (        9)
00:12:36.112    192.334 -   193.224:    0.5615%  (       10)
00:12:36.112    193.224 -   194.115:    0.5644%  (        3)
00:12:36.112    194.115 -   195.005:    0.5739%  (       10)
00:12:36.112    195.005 -   195.896:    0.5825%  (        9)
00:12:36.112    195.896 -   196.786:    0.5958%  (       14)
00:12:36.112    196.786 -   197.677:    0.6025%  (        7)
00:12:36.112    197.677 -   198.567:    0.6063%  (        4)
00:12:36.112    198.567 -   199.457:    0.6140%  (        8)
00:12:36.112    199.457 -   200.348:    0.6206%  (        7)
00:12:36.112    200.348 -   201.238:    0.6264%  (        6)
00:12:36.112    201.238 -   202.129:    0.6359%  (       10)
00:12:36.112    202.129 -   203.019:    0.6473%  (       12)
00:12:36.112    203.019 -   203.910:    0.6550%  (        8)
00:12:36.112    203.910 -   204.800:    0.6635%  (        9)
00:12:36.112    204.800 -   205.690:    0.6702%  (        7)
00:12:36.112    205.690 -   206.581:    0.6778%  (        8)
00:12:36.112    206.581 -   207.471:    0.6845%  (        7)
00:12:36.112    207.471 -   208.362:    0.6902%  (        6)
00:12:36.112    208.362 -   209.252:    0.6959%  (        6)
00:12:36.115    [per-bucket detail trimmed: ~490 further bucket lines spanning 209.252us to 3234.059us, over which the cumulative distribution climbs from 0.6959% to 100.0000%]
00:12:36.115  
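Each histogram line above pairs a latency bucket (lower and upper bound in microseconds) with the cumulative percentage of I/Os at or below the bucket's upper bound, plus the raw I/O count that landed in the bucket. A percentile can therefore be recovered by scanning for the first line whose cumulative column crosses the target. A minimal sketch with awk, assuming timestamp-prefixed lines exactly like those above (the log file name is hypothetical):

  # Report the first bucket whose cumulative percentage reaches 99%.
  # Fields: $1 timestamp, $2 bucket low, $3 "-", $4 bucket high plus ":", $5 cumulative %.
  awk '/ - .*%/ && $5+0 >= 99 { sub(/:$/, "", $4); print "p99 <= " $4 " us"; exit }' nvme_perf.log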
00:12:36.115   00:43:25	-- nvme/nvme.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:12:37.493  Initializing NVMe Controllers
00:12:37.493  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:37.493  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:12:37.493  Initialization complete. Launching workers.
00:12:37.493  ========================================================
00:12:37.493                                                                             Latency(us)
00:12:37.493  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:37.494  PCIE (0000:5e:00.0) NSID 1 from core  0:  128457.00    1505.36     995.46     462.26    1901.51
00:12:37.494  ========================================================
00:12:37.494  Total                                  :  128457.00    1505.36     995.46     462.26    1901.51
00:12:37.494  
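The write-workload invocation above can be reproduced by hand from the SPDK repository root; the flag glosses below are my reading of spdk_nvme_perf's usage text, so verify them against `spdk_nvme_perf -h` on your build:

  sudo ./build/bin/spdk_nvme_perf \
      -q 128     `# queue depth` \
      -w write   `# workload: 100% sequential writes` \
      -o 12288   `# I/O size in bytes (12 KiB)` \
      -t 1       `# run time in seconds` \
      -LL        `# -L enables latency tracking; giving it twice adds the detailed histogram` \
      -i 0       `# shared-memory ID so the tool can coexist with other SPDK processes`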
00:12:37.494  Summary latency data for PCIE (0000:5e:00.0) NSID 1                  from core 0:
00:12:37.494  =================================================================================
00:12:37.494    1.00000% :   926.052us
00:12:37.494   10.00000% :   954.546us
00:12:37.494   25.00000% :   975.917us
00:12:37.494   50.00000% :   997.287us
00:12:37.494   75.00000% :  1018.657us
00:12:37.494   90.00000% :  1040.028us
00:12:37.494   95.00000% :  1054.275us
00:12:37.494   98.00000% :  1061.398us
00:12:37.494   99.00000% :  1068.522us
00:12:37.494   99.50000% :  1075.645us
00:12:37.494   99.90000% :  1410.449us
00:12:37.494   99.99000% :  1894.845us
00:12:37.494   99.99900% :  1909.092us
00:12:37.494   99.99990% :  1909.092us
00:12:37.494   99.99999% :  1909.092us
00:12:37.494  
00:12:37.494  Latency histogram for PCIE (0000:5e:00.0) NSID 1                  from core 0:
00:12:37.494  ==============================================================================
00:12:37.494         Range in us     Cumulative    IO count
00:12:37.494    459.464 -   463.026:    0.0023%  (        3)
00:12:37.494    [per-bucket detail trimmed: ~120 further bucket lines spanning 463.026us to 1909.092us, cumulative 0.0023% -> 100.0000%; the densest buckets fall between roughly 908us and 1076us]
00:12:37.494  
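For trend tracking, the compact "Summary latency data" block above is usually all that is needed; one way to pull it, or just the p99 line, back out of a saved log (file name hypothetical):

  grep -A 16 'Summary latency data for PCIE' nvme_perf.log
  grep -A 16 'Summary latency data for PCIE' nvme_perf.log | grep ' 99.00000%'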
00:12:37.494   00:43:26	-- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:12:37.494  
00:12:37.494  real	0m2.658s
00:12:37.494  user	0m2.178s
00:12:37.494  sys	0m0.364s
00:12:37.494   00:43:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:37.494   00:43:26	-- common/autotest_common.sh@10 -- # set +x
00:12:37.494  ************************************
00:12:37.494  END TEST nvme_perf
00:12:37.494  ************************************
00:12:37.494   00:43:26	-- nvme/nvme.sh@87 -- # run_test nvme_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0
00:12:37.494   00:43:26	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:12:37.494   00:43:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:37.494   00:43:26	-- common/autotest_common.sh@10 -- # set +x
00:12:37.494  ************************************
00:12:37.494  START TEST nvme_hello_world
00:12:37.494  ************************************
00:12:37.494   00:43:26	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0
00:12:37.494  Initializing NVMe Controllers
00:12:37.494  Attached to 0000:5e:00.0
00:12:37.494    Namespace ID: 1 size: 4000GB
00:12:37.494  Initialization complete.
00:12:37.494  INFO: using host memory buffer for IO
00:12:37.494  Hello world!
00:12:37.753  
00:12:37.753  real	0m0.303s
00:12:37.754  user	0m0.081s
00:12:37.754  sys	0m0.185s
00:12:37.754   00:43:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:37.754   00:43:26	-- common/autotest_common.sh@10 -- # set +x
00:12:37.754  ************************************
00:12:37.754  END TEST nvme_hello_world
00:12:37.754  ************************************
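hello_world is the smallest useful SPDK program: it attaches to the controller, writes "Hello world!" to namespace 1, reads it back, and reports which buffer type carried the I/O (here a host memory buffer, meaning no controller memory buffer was used). To run it outside the harness, the device must first be rebound to a userspace driver; a sketch from the SPDK repository root, assuming the same controller as this log:

  sudo ./scripts/setup.sh                    # bind NVMe devices to vfio-pci/uio
  sudo ./build/examples/hello_world -i 0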
00:12:37.754   00:43:26	-- nvme/nvme.sh@88 -- # run_test nvme_sgl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl
00:12:37.754   00:43:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:37.754   00:43:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:37.754   00:43:26	-- common/autotest_common.sh@10 -- # set +x
00:12:37.754  ************************************
00:12:37.754  START TEST nvme_sgl
00:12:37.754  ************************************
00:12:37.754   00:43:26	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl
00:12:38.013  NVMe Readv/Writev Request test
00:12:38.013  Attached to 0000:5e:00.0
00:12:38.013  0000:5e:00.0: build_io_request_0 test passed
00:12:38.013  0000:5e:00.0: build_io_request_1 test passed
00:12:38.013  0000:5e:00.0: build_io_request_2 test passed
00:12:38.013  0000:5e:00.0: build_io_request_3 test passed
00:12:38.013  0000:5e:00.0: build_io_request_4 test passed
00:12:38.013  0000:5e:00.0: build_io_request_5 test passed
00:12:38.013  0000:5e:00.0: build_io_request_6 test passed
00:12:38.013  0000:5e:00.0: build_io_request_7 test passed
00:12:38.013  0000:5e:00.0: build_io_request_8 test passed
00:12:38.013  0000:5e:00.0: build_io_request_9 test passed
00:12:38.013  0000:5e:00.0: build_io_request_10 test passed
00:12:38.013  0000:5e:00.0: build_io_request_11 test passed
00:12:38.013  Cleaning up...
00:12:38.013  
00:12:38.013  real	0m0.394s
00:12:38.013  user	0m0.171s
00:12:38.013  sys	0m0.179s
00:12:38.013   00:43:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:38.013   00:43:27	-- common/autotest_common.sh@10 -- # set +x
00:12:38.013  ************************************
00:12:38.013  END TEST nvme_sgl
00:12:38.013  ************************************
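Each of these units is launched through autotest's run_test helper, which produces the START/END banners and the real/user/sys timing lines seen throughout this log. A stripped-down, illustrative equivalent (not the exact SPDK implementation):

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                              # run the test binary with its arguments
      echo "END TEST $name"
  }
  run_test nvme_sgl ./test/nvme/sgl/sgl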
00:12:38.013   00:43:27	-- nvme/nvme.sh@89 -- # run_test nvme_e2edp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp
00:12:38.013   00:43:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:38.013   00:43:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:38.013   00:43:27	-- common/autotest_common.sh@10 -- # set +x
00:12:38.013  ************************************
00:12:38.013  START TEST nvme_e2edp
00:12:38.013  ************************************
00:12:38.013   00:43:27	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp
00:12:38.582  NVMe Write/Read with End-to-End data protection test
00:12:38.582  Attached to 0000:5e:00.0
00:12:38.582  Cleaning up...
00:12:38.582  
00:12:38.582  real	0m0.288s
00:12:38.582  user	0m0.080s
00:12:38.582  sys	0m0.165s
00:12:38.582   00:43:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:38.582   00:43:27	-- common/autotest_common.sh@10 -- # set +x
00:12:38.582  ************************************
00:12:38.582  END TEST nvme_e2edp
00:12:38.582  ************************************
00:12:38.582   00:43:27	-- nvme/nvme.sh@90 -- # run_test nvme_reserve /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve
00:12:38.582   00:43:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:38.582   00:43:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:38.582   00:43:27	-- common/autotest_common.sh@10 -- # set +x
00:12:38.582  ************************************
00:12:38.582  START TEST nvme_reserve
00:12:38.582  ************************************
00:12:38.582   00:43:27	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve
00:12:38.841  =====================================================
00:12:38.841  NVMe Controller at PCI bus 94, device 0, function 0
00:12:38.841  =====================================================
00:12:38.841  Reservations:                Not Supported
00:12:38.841  Reservation test passed
00:12:38.841  
00:12:38.841  real	0m0.304s
00:12:38.841  user	0m0.073s
00:12:38.841  sys	0m0.185s
00:12:38.841   00:43:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:38.841   00:43:27	-- common/autotest_common.sh@10 -- # set +x
00:12:38.841  ************************************
00:12:38.841  END TEST nvme_reserve
00:12:38.841  ************************************
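"Reservations: Not Supported" means this controller does not implement NVMe reservations, so the test passes trivially. Support is advertised per namespace in the RESCAP field; with the device bound back to the kernel driver it can be checked via nvme-cli (device node assumed):

  sudo nvme id-ns /dev/nvme0n1 --human-readable | grep -i rescap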
00:12:38.841   00:43:27	-- nvme/nvme.sh@91 -- # run_test nvme_err_injection /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection
00:12:38.841   00:43:27	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:38.841   00:43:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:38.841   00:43:27	-- common/autotest_common.sh@10 -- # set +x
00:12:38.841  ************************************
00:12:38.841  START TEST nvme_err_injection
00:12:38.841  ************************************
00:12:38.841   00:43:27	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection
00:12:39.101  NVMe Error Injection test
00:12:39.101  Attached to 0000:5e:00.0
00:12:39.101  0000:5e:00.0: get features failed as expected
00:12:39.101  0000:5e:00.0: get features successfully as expected
00:12:39.101  0000:5e:00.0: read failed as expected
00:12:39.101  0000:5e:00.0: read successfully as expected
00:12:39.101  Cleaning up...
00:12:39.101  
00:12:39.101  real	0m0.335s
00:12:39.101  user	0m0.085s
00:12:39.101  sys	0m0.183s
00:12:39.101   00:43:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:39.101   00:43:28	-- common/autotest_common.sh@10 -- # set +x
00:12:39.101  ************************************
00:12:39.101  END TEST nvme_err_injection
00:12:39.101  ************************************
00:12:39.101   00:43:28	-- nvme/nvme.sh@92 -- # run_test nvme_overhead /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:12:39.101   00:43:28	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:12:39.101   00:43:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:39.101   00:43:28	-- common/autotest_common.sh@10 -- # set +x
00:12:39.101  ************************************
00:12:39.101  START TEST nvme_overhead
00:12:39.101  ************************************
00:12:39.101   00:43:28	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:12:40.480  Initializing NVMe Controllers
00:12:40.480  Attached to 0000:5e:00.0
00:12:40.480  Initialization complete. Launching workers.
00:12:40.480  submit (in ns)   avg, min, max =   4702.4,   4396.5,  68042.6
00:12:40.480  complete (in ns) avg, min, max =   2751.9,   2656.5, 882664.3
00:12:40.480  
00:12:40.480  Submit histogram
00:12:40.480  ================
00:12:40.480         Range in us     Cumulative     Count
00:12:40.480      4.397 -     4.424:    0.2756%  (      243)
00:12:40.480     [per-bucket detail trimmed: ~100 further bucket lines spanning 4.424us to 68.118us, cumulative 0.2756% -> 100.0000%]
00:12:40.480  
00:12:40.480  Complete histogram
00:12:40.480  ==================
00:12:40.480         Range in us     Cumulative     Count
00:12:40.480      2.643 -     2.657:    0.0011%  (        1)
00:12:40.481     [per-bucket detail trimmed: ~110 further bucket lines spanning 2.657us to 883.311us, cumulative 0.0011% -> 100.0000%]
00:12:40.481  
00:12:40.481  
00:12:40.481  real	0m1.324s
00:12:40.481  user	0m1.080s
00:12:40.481  sys	0m0.183s
00:12:40.481   00:43:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:40.481   00:43:29	-- common/autotest_common.sh@10 -- # set +x
00:12:40.481  ************************************
00:12:40.481  END TEST nvme_overhead
00:12:40.481  ************************************
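
Each suite above and below is driven by the run_test helper from autotest_common.sh, which prints the START/END banners and propagates the suite's exit status. A minimal sketch of that banner-and-status pattern, assuming nothing about the real function body beyond what the banners and timing lines show:

    #!/usr/bin/env bash
    # Sketch of the run_test wrapper behind the START/END banners.
    # Assumption: the real autotest_common.sh version also manages xtrace;
    # only the visible contract (banners, time, exit status) is shown here.
    run_test() {
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"
        local rc=$?
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return $rc
    }

    run_test nvme_demo true   # usage: suite name first, then the command to run
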
00:12:40.481   00:43:29	-- nvme/nvme.sh@93 -- # run_test nvme_arbitration /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0
00:12:40.481   00:43:29	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:12:40.481   00:43:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:40.481   00:43:29	-- common/autotest_common.sh@10 -- # set +x
00:12:40.481  ************************************
00:12:40.481  START TEST nvme_arbitration
00:12:40.481  ************************************
00:12:40.481   00:43:29	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0
00:12:44.674  Initializing NVMe Controllers
00:12:44.674  Attached to 0000:5e:00.0
00:12:44.674  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 0
00:12:44.674  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 1
00:12:44.674  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 2
00:12:44.674  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 3
00:12:44.674  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:12:44.674  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:12:44.674  Initialization complete. Launching workers.
00:12:44.674  Starting thread on core 1 with urgent priority queue
00:12:44.674  Starting thread on core 2 with urgent priority queue
00:12:44.674  Starting thread on core 3 with urgent priority queue
00:12:44.674  Starting thread on core 0 with urgent priority queue
00:12:44.674  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 0:  6903.33 IO/s    14.49 secs/100000 ios
00:12:44.674  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 1:  6958.67 IO/s    14.37 secs/100000 ios
00:12:44.674  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 2:  5764.67 IO/s    17.35 secs/100000 ios
00:12:44.674  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 3:  5454.67 IO/s    18.33 secs/100000 ios
00:12:44.674  ========================================================
00:12:44.674  
00:12:44.674  
00:12:44.674  real	0m3.354s
00:12:44.674  user	0m9.165s
00:12:44.674  sys	0m0.164s
00:12:44.674   00:43:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:44.674   00:43:33	-- common/autotest_common.sh@10 -- # set +x
00:12:44.674  ************************************
00:12:44.674  END TEST nvme_arbitration
00:12:44.674  ************************************
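
nvme_arbitration launches one submitter thread per core in the 0xf mask, each driving the same controller through its own urgent-priority queue pair; the per-core IO/s lines above show how roughly 25k combined IO/s split across the four threads. nvme.sh passes only -t and -i; the longer option line in the log is the example echoing its effective defaults. A sketch of reproducing the run, assuming SPDK_DIR (a placeholder of mine) points at a built SPDK tree:

    # -t 3: run for three seconds; -i 0: shared-memory ID so the example
    # can coexist with other SPDK processes on the same hugepage pool.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    sudo "$SPDK_DIR/build/examples/arbitration" -t 3 -i 0
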
00:12:44.674   00:43:33	-- nvme/nvme.sh@94 -- # run_test nvme_single_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 -L log
00:12:44.674   00:43:33	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:12:44.674   00:43:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:44.674   00:43:33	-- common/autotest_common.sh@10 -- # set +x
00:12:44.674  ************************************
00:12:44.674  START TEST nvme_single_aen
00:12:44.674  ************************************
00:12:44.674   00:43:33	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 -L log
00:12:44.674  [2024-12-17 00:43:33.143303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:44.674  [2024-12-17 00:43:33.143369] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:44.674  [2024-12-17 00:43:33.388178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:12:44.674  [2024-12-17 00:43:33.388218] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 987424) is not found. Dropping the request.
00:12:44.674  [2024-12-17 00:43:33.388244] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 987424) is not found. Dropping the request.
00:12:44.674  [2024-12-17 00:43:33.388260] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 987424) is not found. Dropping the request.
00:12:44.674  [2024-12-17 00:43:33.388276] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 987424) is not found. Dropping the request.
00:12:49.947  Asynchronous Event Request test
00:12:49.947  Attached to 0000:5e:00.0
00:12:49.947  Reset controller to setup AER completions for this process
00:12:49.947  Registering asynchronous event callbacks...
00:12:49.947  Getting orig temperature thresholds of all controllers
00:12:49.947  0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:49.947  Setting all controllers temperature threshold low to trigger AER
00:12:49.947  Waiting for all controllers temperature threshold to be set lower
00:12:49.947  Waiting for all controllers to trigger AER and reset threshold
00:12:49.947  0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:49.947  aer_cb - Resetting Temp Threshold for device: 0000:5e:00.0
00:12:49.947  0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:12:49.947  Cleaning up...
00:12:49.947  
00:12:49.947  real	0m5.553s
00:12:49.947  user	0m4.603s
00:12:49.947  sys	0m0.887s
00:12:49.947   00:43:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:49.947   00:43:38	-- common/autotest_common.sh@10 -- # set +x
00:12:49.947  ************************************
00:12:49.947  END TEST nvme_single_aen
00:12:49.947  ************************************
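
The single-AEN test follows the sequence printed above: read the drive's temperature threshold (343 Kelvin), program it below the current temperature (310 Kelvin) so the controller raises an Asynchronous Event for log page 2 (SMART/Health), then restore the threshold in aer_cb. The same threshold dance can be tried from the kernel side with nvme-cli; a hedged sketch, assuming the device is bound back to the kernel driver as /dev/nvme0 (during this test it is bound to SPDK instead) and that feature ID 4 is the standard Temperature Threshold feature:

    # Read, lower, and restore the composite temperature threshold.
    sudo nvme get-feature /dev/nvme0 --feature-id=4              # expect ~343 (Kelvin)
    sudo nvme set-feature /dev/nvme0 --feature-id=4 --value=300  # below 310 K: AEN fires
    sudo nvme set-feature /dev/nvme0 --feature-id=4 --value=343  # put it back
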
00:12:49.947   00:43:38	-- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:12:49.947   00:43:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:49.947   00:43:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:49.947   00:43:38	-- common/autotest_common.sh@10 -- # set +x
00:12:49.947  ************************************
00:12:49.947  START TEST nvme_doorbell_aers
00:12:49.947  ************************************
00:12:49.947   00:43:38	-- common/autotest_common.sh@1114 -- # nvme_doorbell_aers
00:12:49.947   00:43:38	-- nvme/nvme.sh@70 -- # bdfs=()
00:12:49.947   00:43:38	-- nvme/nvme.sh@70 -- # local bdfs bdf
00:12:49.947   00:43:38	-- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:12:49.947    00:43:38	-- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:12:49.947    00:43:38	-- common/autotest_common.sh@1508 -- # bdfs=()
00:12:49.947    00:43:38	-- common/autotest_common.sh@1508 -- # local bdfs
00:12:49.947    00:43:38	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:49.947     00:43:38	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:12:49.947     00:43:38	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:12:49.947    00:43:38	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:12:49.947    00:43:38	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:12:49.947   00:43:38	-- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:12:49.947   00:43:38	-- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:5e:00.0'
00:12:50.206  [2024-12-17 00:43:39.210583] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 991059) is not found. Dropping the request.
00:13:00.188  Executing: test_write_invalid_db
00:13:00.188  Waiting for AER completion...
00:13:00.188  Failure: test_write_invalid_db
00:13:00.188  
00:13:00.188  Executing: test_invalid_db_write_overflow_sq
00:13:00.188  Waiting for AER completion...
00:13:00.188  Failure: test_invalid_db_write_overflow_sq
00:13:00.188  
00:13:00.188  Executing: test_invalid_db_write_overflow_cq
00:13:00.188  Waiting for AER completion...
00:13:00.188  Failure: test_invalid_db_write_overflow_cq
00:13:00.188  
00:13:00.188  
00:13:00.188  real	0m10.125s
00:13:00.188  user	0m7.179s
00:13:00.188  sys	0m2.835s
00:13:00.188   00:43:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:00.188   00:43:48	-- common/autotest_common.sh@10 -- # set +x
00:13:00.188  ************************************
00:13:00.188  END TEST nvme_doorbell_aers
00:13:00.188  ************************************
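
The nvme_doorbell_aers wrapper is shell: it discovers every NVMe PCI address by rendering gen_nvme.sh's JSON config through jq, then runs the doorbell_aers binary against each device under a 10-second timeout, exactly as the xtrace above shows. The sub-tests write invalid doorbell values and wait for an AER; note the per-sub-test Failure lines above do not fail the suite, which appears to check only the binary's exit status. The discovery-plus-loop, lifted from the trace (SPDK_DIR is my placeholder):

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    # Pull every controller's PCI address (traddr) out of the generated config.
    mapfile -t bdfs < <("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    for bdf in "${bdfs[@]}"; do
        # --preserve-status keeps the test's own exit code if it finishes in time
        sudo timeout --preserve-status 10 \
            "$SPDK_DIR/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done
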
00:13:00.188    00:43:48	-- nvme/nvme.sh@97 -- # uname
00:13:00.188   00:43:48	-- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:13:00.188   00:43:48	-- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:13:00.188   00:43:48	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:13:00.188   00:43:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:00.188   00:43:48	-- common/autotest_common.sh@10 -- # set +x
00:13:00.188  ************************************
00:13:00.188  START TEST nvme_multi_aen
00:13:00.188  ************************************
00:13:00.188   00:43:48	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:13:00.188  [2024-12-17 00:43:48.930255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:00.188  [2024-12-17 00:43:48.930302] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:00.188  [2024-12-17 00:43:49.203349] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:13:00.188  [2024-12-17 00:43:49.203392] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 991059) is not found. Dropping the request.
00:13:00.188  [2024-12-17 00:43:49.203418] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 991059) is not found. Dropping the request.
00:13:00.188  [2024-12-17 00:43:49.203435] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 991059) is not found. Dropping the request.
00:13:00.188  [2024-12-17 00:43:49.207644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:00.188  [2024-12-17 00:43:49.207746] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:00.188  Child process pid: 993198
00:13:05.461  [Child] Asynchronous Event Request test
00:13:05.461  [Child] Attached to 0000:5e:00.0
00:13:05.461  [Child] Registering asynchronous event callbacks...
00:13:05.461  [Child] Getting orig temperature thresholds of all controllers
00:13:05.461  [Child] 0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:13:05.461  [Child] Waiting for all controllers to trigger AER and reset threshold
00:13:05.461  [Child] 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:13:05.461  [Child] 0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:13:05.461  [Child] 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:13:05.461  [Child] Cleaning up...
00:13:05.461  [Child] 0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:13:05.461  Asynchronous Event Request test
00:13:05.461  Attached to 0000:5e:00.0
00:13:05.461  Reset controller to setup AER completions for this process
00:13:05.461  Registering asynchronous event callbacks...
00:13:05.461  Getting orig temperature thresholds of all controllers
00:13:05.461  0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:13:05.461  Setting all controllers temperature threshold low to trigger AER
00:13:05.461  Waiting for all controllers temperature threshold to be set lower
00:13:05.461  Waiting for all controllers to trigger AER and reset threshold
00:13:05.461  0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:13:05.461  aer_cb - Resetting Temp Threshold for device: 0000:5e:00.0
00:13:05.461  0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:13:05.461  Cleaning up...
00:13:05.461  
00:13:05.461  real	0m4.799s
00:13:05.461  user	0m3.708s
00:13:05.461  sys	0m2.081s
00:13:05.461   00:43:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:05.461   00:43:53	-- common/autotest_common.sh@10 -- # set +x
00:13:05.461  ************************************
00:13:05.461  END TEST nvme_multi_aen
00:13:05.461  ************************************
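
With -m the aer tool repeats the test in a multi-process setup: the two EAL parameter lines above show the parent on core mask 0x1 and a forked child (pid 993198) on 0x2, both using --file-prefix=spdk0 so the child attaches to the parent's hugepage state as a DPDK secondary process. The same primary/secondary pairing can be set up by hand with any two SPDK tools sharing a shared-memory ID; a sketch, with SPDK_DIR again a placeholder and spdk_nvme_identify used purely as a convenient secondary:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    # Primary: owns the controller, keeps running while the secondary attaches.
    sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" -i 0 -q 1 -w read -o 4096 -t 30 -c 0x1 &
    primary=$!
    sleep 3   # give the primary time to initialize the shared state
    # Secondary: -i 0 matches the primary's shared-memory ID.
    sudo "$SPDK_DIR/build/bin/spdk_nvme_identify" -i 0
    wait "$primary"
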
00:13:05.461   00:43:53	-- nvme/nvme.sh@99 -- # run_test nvme_startup /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000
00:13:05.461   00:43:53	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:13:05.461   00:43:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:05.461   00:43:53	-- common/autotest_common.sh@10 -- # set +x
00:13:05.461  ************************************
00:13:05.461  START TEST nvme_startup
00:13:05.461  ************************************
00:13:05.461   00:43:53	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000
00:13:05.461  Initializing NVMe Controllers
00:13:05.461  Attached to 0000:5e:00.0
00:13:05.461  Initialization complete.
00:13:05.461  Time used: 268702.281 (us).
00:13:05.462  
00:13:05.462  real	0m0.317s
00:13:05.462  user	0m0.074s
00:13:05.462  sys	0m0.181s
00:13:05.462   00:43:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:05.462   00:43:54	-- common/autotest_common.sh@10 -- # set +x
00:13:05.462  ************************************
00:13:05.462  END TEST nvme_startup
00:13:05.462  ************************************
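
nvme_startup only measures controller bring-up: attach, report the time used (about 269 ms here), and exit. The -t argument looks like an allowance that the measured time must stay under, though that reading is an assumption on my part; the invocation itself is verbatim from the log:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    # 1000000 = a deliberately generous startup budget (assumed to be in us).
    time sudo "$SPDK_DIR/test/nvme/startup/startup" -t 1000000
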
00:13:05.462   00:43:54	-- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:13:05.462   00:43:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:05.462   00:43:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:05.462   00:43:54	-- common/autotest_common.sh@10 -- # set +x
00:13:05.462  ************************************
00:13:05.462  START TEST nvme_multi_secondary
00:13:05.462  ************************************
00:13:05.462   00:43:54	-- common/autotest_common.sh@1114 -- # nvme_multi_secondary
00:13:05.462   00:43:54	-- nvme/nvme.sh@52 -- # pid0=993845
00:13:05.462   00:43:54	-- nvme/nvme.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:13:05.462   00:43:54	-- nvme/nvme.sh@54 -- # pid1=993846
00:13:05.462   00:43:54	-- nvme/nvme.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:13:05.462   00:43:54	-- nvme/nvme.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:13:08.759  Initializing NVMe Controllers
00:13:08.759  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:13:08.759  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 1
00:13:08.759  Initialization complete. Launching workers.
00:13:08.759  ========================================================
00:13:08.759                                                                             Latency(us)
00:13:08.759  Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:08.759  PCIE (0000:5e:00.0) NSID 1 from core  1:   76415.67     298.50     209.08      26.38    3611.00
00:13:08.759  ========================================================
00:13:08.759  Total                                  :   76415.67     298.50     209.08      26.38    3611.00
00:13:08.759  
00:13:08.759  Initializing NVMe Controllers
00:13:08.759  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:13:08.759  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 2
00:13:08.759  Initialization complete. Launching workers.
00:13:08.760  ========================================================
00:13:08.760                                                                             Latency(us)
00:13:08.760  Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:08.760  PCIE (0000:5e:00.0) NSID 1 from core  2:   38445.60     150.18     415.81      23.88    6736.04
00:13:08.760  ========================================================
00:13:08.760  Total                                  :   38445.60     150.18     415.81      23.88    6736.04
00:13:08.760  
00:13:08.760   00:43:57	-- nvme/nvme.sh@56 -- # wait 993845
00:13:10.206  Initializing NVMe Controllers
00:13:10.206  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:13:10.206  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:13:10.206  Initialization complete. Launching workers.
00:13:10.206  ========================================================
00:13:10.206                                                                             Latency(us)
00:13:10.206  Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:10.206  PCIE (0000:5e:00.0) NSID 1 from core  0:   79205.41     309.40     201.68      54.32    3970.80
00:13:10.206  ========================================================
00:13:10.206  Total                                  :   79205.41     309.40     201.68      54.32    3970.80
00:13:10.206  
00:13:10.206   00:43:59	-- nvme/nvme.sh@57 -- # wait 993846
00:13:10.206   00:43:59	-- nvme/nvme.sh@61 -- # pid0=994566
00:13:10.206   00:43:59	-- nvme/nvme.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:13:10.207   00:43:59	-- nvme/nvme.sh@63 -- # pid1=994567
00:13:10.207   00:43:59	-- nvme/nvme.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:13:10.207   00:43:59	-- nvme/nvme.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:13:14.484  Initializing NVMe Controllers
00:13:14.484  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:13:14.484  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 1
00:13:14.484  Initialization complete. Launching workers.
00:13:14.484  ========================================================
00:13:14.484                                                                             Latency(us)
00:13:14.484  Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:14.484  PCIE (0000:5e:00.0) NSID 1 from core  1:   78046.33     304.87     204.69      25.39    2847.47
00:13:14.484  ========================================================
00:13:14.484  Total                                  :   78046.33     304.87     204.69      25.39    2847.47
00:13:14.484  
00:13:14.484  Initializing NVMe Controllers
00:13:14.484  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:13:14.484  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:13:14.484  Initialization complete. Launching workers.
00:13:14.484  ========================================================
00:13:14.484                                                                             Latency(us)
00:13:14.484  Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:14.484  PCIE (0000:5e:00.0) NSID 1 from core  0:   78156.92     305.30     204.39      24.80    3480.21
00:13:14.484  ========================================================
00:13:14.484  Total                                  :   78156.92     305.30     204.39      24.80    3480.21
00:13:14.484  
00:13:15.862  Initializing NVMe Controllers
00:13:15.862  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:13:15.862  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 2
00:13:15.862  Initialization complete. Launching workers.
00:13:15.862  ========================================================
00:13:15.862                                                                             Latency(us)
00:13:15.862  Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:15.862  PCIE (0000:5e:00.0) NSID 1 from core  2:   42050.96     164.26     379.96      23.17    6279.74
00:13:15.862  ========================================================
00:13:15.862  Total                                  :   42050.96     164.26     379.96      23.17    6279.74
00:13:15.862  
00:13:15.862   00:44:04	-- nvme/nvme.sh@65 -- # wait 994566
00:13:15.862   00:44:04	-- nvme/nvme.sh@66 -- # wait 994567
00:13:15.862  
00:13:15.862  real	0m10.714s
00:13:15.862  user	0m18.433s
00:13:15.862  sys	0m1.163s
00:13:15.862   00:44:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:15.862   00:44:04	-- common/autotest_common.sh@10 -- # set +x
00:13:15.862  ************************************
00:13:15.862  END TEST nvme_multi_secondary
00:13:15.862  ************************************
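
nvme_multi_secondary runs three spdk_nvme_perf instances concurrently against the one controller, each pinned to its own core mask (0x1, 0x2, 0x4) and all sharing -i 0 so they can reach the device through SPDK's multi-process support; the tables above report each instance's own throughput. The first round's launch pattern (nvme.sh@51-57 in the xtrace), simplified — SPDK_DIR is my placeholder, and the pid bookkeeping stands in for the script's wait calls:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    perf="$SPDK_DIR/build/bin/spdk_nvme_perf"
    sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
    wait "$pid0" "$pid1"
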
00:13:15.862   00:44:04	-- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:13:15.862   00:44:04	-- nvme/nvme.sh@102 -- # kill_stub
00:13:15.862   00:44:04	-- common/autotest_common.sh@1075 -- # [[ -e /proc/986749 ]]
00:13:15.862   00:44:04	-- common/autotest_common.sh@1076 -- # kill 986749
00:13:15.862   00:44:04	-- common/autotest_common.sh@1077 -- # wait 986749
00:13:16.797  [2024-12-17 00:44:05.744657] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 993183) is not found. Dropping the request.
00:13:16.797  [2024-12-17 00:44:05.744773] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 993183) is not found. Dropping the request.
00:13:16.797  [2024-12-17 00:44:05.744812] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 993183) is not found. Dropping the request.
00:13:16.797  [2024-12-17 00:44:05.744850] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 993183) is not found. Dropping the request.
00:13:20.990   00:44:09	-- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0
00:13:20.990   00:44:09	-- common/autotest_common.sh@1083 -- # echo 2
00:13:20.990   00:44:09	-- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:13:20.990   00:44:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:20.990   00:44:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:20.990   00:44:09	-- common/autotest_common.sh@10 -- # set +x
00:13:20.990  ************************************
00:13:20.990  START TEST bdev_nvme_reset_stuck_adm_cmd
00:13:20.990  ************************************
00:13:20.990   00:44:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:13:20.990  * Looking for test storage...
00:13:20.990  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:13:20.990    00:44:09	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:20.990     00:44:09	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:20.990     00:44:09	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:20.990    00:44:09	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:20.990    00:44:09	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:20.990    00:44:09	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:20.990    00:44:09	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:20.990    00:44:09	-- scripts/common.sh@335 -- # IFS=.-:
00:13:20.990    00:44:09	-- scripts/common.sh@335 -- # read -ra ver1
00:13:20.990    00:44:09	-- scripts/common.sh@336 -- # IFS=.-:
00:13:20.990    00:44:09	-- scripts/common.sh@336 -- # read -ra ver2
00:13:20.990    00:44:09	-- scripts/common.sh@337 -- # local 'op=<'
00:13:20.990    00:44:09	-- scripts/common.sh@339 -- # ver1_l=2
00:13:20.990    00:44:09	-- scripts/common.sh@340 -- # ver2_l=1
00:13:20.990    00:44:09	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:20.990    00:44:09	-- scripts/common.sh@343 -- # case "$op" in
00:13:20.990    00:44:09	-- scripts/common.sh@344 -- # : 1
00:13:20.990    00:44:09	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:20.990    00:44:09	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:20.990     00:44:09	-- scripts/common.sh@364 -- # decimal 1
00:13:20.990     00:44:09	-- scripts/common.sh@352 -- # local d=1
00:13:20.990     00:44:09	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:20.990     00:44:09	-- scripts/common.sh@354 -- # echo 1
00:13:20.990    00:44:09	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:20.990     00:44:09	-- scripts/common.sh@365 -- # decimal 2
00:13:20.990     00:44:09	-- scripts/common.sh@352 -- # local d=2
00:13:20.990     00:44:09	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:20.990     00:44:09	-- scripts/common.sh@354 -- # echo 2
00:13:20.990    00:44:09	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:20.990    00:44:09	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:20.990    00:44:09	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:20.990    00:44:09	-- scripts/common.sh@367 -- # return 0
00:13:20.990    00:44:09	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:20.990    00:44:09	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:20.990  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:20.990  		--rc genhtml_branch_coverage=1
00:13:20.990  		--rc genhtml_function_coverage=1
00:13:20.990  		--rc genhtml_legend=1
00:13:20.990  		--rc geninfo_all_blocks=1
00:13:20.990  		--rc geninfo_unexecuted_blocks=1
00:13:20.990  		
00:13:20.990  		'
00:13:20.990    00:44:09	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:20.990  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:20.990  		--rc genhtml_branch_coverage=1
00:13:20.990  		--rc genhtml_function_coverage=1
00:13:20.990  		--rc genhtml_legend=1
00:13:20.990  		--rc geninfo_all_blocks=1
00:13:20.990  		--rc geninfo_unexecuted_blocks=1
00:13:20.990  		
00:13:20.990  		'
00:13:20.990    00:44:09	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:20.990  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:20.990  		--rc genhtml_branch_coverage=1
00:13:20.990  		--rc genhtml_function_coverage=1
00:13:20.990  		--rc genhtml_legend=1
00:13:20.990  		--rc geninfo_all_blocks=1
00:13:20.990  		--rc geninfo_unexecuted_blocks=1
00:13:20.990  		
00:13:20.990  		'
00:13:20.990    00:44:09	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:20.990  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:20.990  		--rc genhtml_branch_coverage=1
00:13:20.990  		--rc genhtml_function_coverage=1
00:13:20.990  		--rc genhtml_legend=1
00:13:20.990  		--rc geninfo_all_blocks=1
00:13:20.990  		--rc geninfo_unexecuted_blocks=1
00:13:20.990  		
00:13:20.990  		'
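
The block above is scripts/common.sh deciding which lcov option set to export: cmp_versions splits both version strings on '.', '-' and ':' and compares field by field, and since the installed lcov is a 1.x it takes the pre-2.0 LCOV_OPTS branch. The comparison logic, reconstructed as a sketch from the xtrace (not the verbatim source):

    # lt A B: succeed (return 0) iff version A sorts strictly before B.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo '1.15 < 2'   # matches the trace: lt returns 0
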
00:13:20.990   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:13:20.990   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:13:20.990   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:13:20.990   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:13:20.990   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:13:20.990    00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:13:20.990    00:44:09	-- common/autotest_common.sh@1519 -- # bdfs=()
00:13:20.990    00:44:09	-- common/autotest_common.sh@1519 -- # local bdfs
00:13:20.990    00:44:09	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:13:20.990     00:44:09	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:13:20.990     00:44:09	-- common/autotest_common.sh@1508 -- # bdfs=()
00:13:20.990     00:44:09	-- common/autotest_common.sh@1508 -- # local bdfs
00:13:20.990     00:44:09	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:13:20.990      00:44:09	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:13:20.990      00:44:09	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:13:20.990     00:44:09	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:13:20.990     00:44:09	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:13:20.990    00:44:09	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:13:20.991   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:5e:00.0
00:13:20.991   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:5e:00.0 ']'
00:13:20.991   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=996091
00:13:20.991   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:13:20.991   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0xF
00:13:20.991   00:44:09	-- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 996091
00:13:20.991   00:44:09	-- common/autotest_common.sh@829 -- # '[' -z 996091 ']'
00:13:20.991   00:44:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:20.991   00:44:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:20.991   00:44:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:20.991  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:20.991   00:44:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:20.991   00:44:09	-- common/autotest_common.sh@10 -- # set +x
00:13:20.991  [2024-12-17 00:44:09.879625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:20.991  [2024-12-17 00:44:09.879697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996091 ]
00:13:20.991  EAL: No free 2048 kB hugepages reported on node 1
00:13:20.991  [2024-12-17 00:44:10.011072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:20.991  [2024-12-17 00:44:10.067158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:20.991  [2024-12-17 00:44:10.067344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:20.991  [2024-12-17 00:44:10.067363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:20.991  [2024-12-17 00:44:10.067491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:20.991  [2024-12-17 00:44:10.067491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:20.991  [2024-12-17 00:44:10.247161] 'OCF_Core' volume operations registered
00:13:20.991  [2024-12-17 00:44:10.249583] 'OCF_Cache' volume operations registered
00:13:21.250  [2024-12-17 00:44:10.252503] 'OCF Composite' volume operations registered
00:13:21.250  [2024-12-17 00:44:10.254931] 'SPDK_block_device' volume operations registered
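
spdk_tgt is now up: EAL started four reactors (one per core in -m 0xF) and the OCF/SPDK bdev modules registered their volume operations. waitforlisten then blocks until the target's RPC socket answers, which is why the subsequent rpc_cmd calls are safe. A sketch of that readiness poll, assuming rpc.py and the default /var/tmp/spdk.sock path seen in the log; the real waitforlisten in autotest_common.sh is more thorough:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    # Poll until the target's RPC socket accepts a trivial call.
    for _ in $(seq 1 100); do
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done
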
00:13:21.819   00:44:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:21.819   00:44:10	-- common/autotest_common.sh@862 -- # return 0
00:13:21.819   00:44:10	-- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0
00:13:21.819   00:44:10	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:21.819   00:44:10	-- common/autotest_common.sh@10 -- # set +x
00:13:25.107  nvme0n1
00:13:25.107   00:44:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.107    00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:13:25.107   00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_OZRzX.txt
00:13:25.107   00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:13:25.107   00:44:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.107   00:44:13	-- common/autotest_common.sh@10 -- # set +x
00:13:25.107  true
00:13:25.107   00:44:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.107    00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:13:25.107   00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734392653
00:13:25.107   00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=996635
00:13:25.107   00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:13:25.108   00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:13:25.108   00:44:13	-- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:13:26.484   00:44:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:13:26.484   00:44:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.484   00:44:15	-- common/autotest_common.sh@10 -- # set +x
00:13:26.484  [2024-12-17 00:44:15.737427] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:13:26.484  [2024-12-17 00:44:15.737654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:13:26.484  [2024-12-17 00:44:15.737676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:13:26.484  [2024-12-17 00:44:15.737697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:26.484  [2024-12-17 00:44:15.738900] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:13:26.484   00:44:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.484   00:44:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 996635
00:13:26.484  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 996635
00:13:26.484   00:44:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 996635
00:13:26.743    00:44:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:13:26.743   00:44:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:13:26.743   00:44:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:13:26.743   00:44:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.743   00:44:15	-- common/autotest_common.sh@10 -- # set +x
00:13:30.935   00:44:19	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.935   00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_OZRzX.txt
00:13:30.935   00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:13:30.935     00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:13:30.935     00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:13:30.935      00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:13:30.935   00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:13:30.935     00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:13:30.935     00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:13:30.935      00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:13:30.935    00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:13:30.935   00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:13:30.935   00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_OZRzX.txt
00:13:30.935   00:44:19	-- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 996091
00:13:30.935   00:44:19	-- common/autotest_common.sh@936 -- # '[' -z 996091 ']'
00:13:30.935   00:44:19	-- common/autotest_common.sh@940 -- # kill -0 996091
00:13:30.935    00:44:19	-- common/autotest_common.sh@941 -- # uname
00:13:30.935   00:44:19	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:30.935    00:44:19	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 996091
00:13:30.935   00:44:19	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:30.935   00:44:19	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:30.935   00:44:19	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 996091'
00:13:30.935  killing process with pid 996091
00:13:30.935   00:44:19	-- common/autotest_common.sh@955 -- # kill 996091
00:13:30.935   00:44:19	-- common/autotest_common.sh@960 -- # wait 996091
00:13:30.935   00:44:20	-- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:13:30.935   00:44:20	-- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:13:30.935  
00:13:30.935  real	0m10.560s
00:13:30.935  user	0m39.570s
00:13:30.935  sys	0m0.927s
00:13:30.935   00:44:20	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:30.935   00:44:20	-- common/autotest_common.sh@10 -- # set +x
00:13:30.935  ************************************
00:13:30.935  END TEST bdev_nvme_reset_stuck_adm_cmd
00:13:30.935  ************************************
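
bdev_nvme_reset_stuck_adm_cmd stages a stuck admin command on purpose: bdev_nvme_add_error_injection arms a one-shot error on admin opcode 10 (0x0a, Get Features) that holds the command for up to 15 s; bdev_nvme_send_cmd then submits a Get Features (Number of Queues) that gets stuck; bdev_nvme_reset_controller must complete anyway (diff_time=2 s against the 5 s budget) and manually complete the stuck command with the injected status, which the trace decodes as sct=0x0 / sc=0x1 (Invalid Opcode, matching the NOTICE lines). The decode step pulls the 16-byte completion out of the temp file as base64 and slices the status field; a simplified equivalent of base64_decode_bits:

    # Status lives in the top two bytes of the 16-byte completion:
    # bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT.
    spdk_nvme_status='AAAAAAAAAAAAAAAAAAACAA=='   # value captured in the trace
    mapfile -t bytes < <(base64 -d <<< "$spdk_nvme_status" | hexdump -ve '/1 "0x%02x\n"')
    status=$(( (bytes[15] << 8) | bytes[14] ))
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # prints: sc=0x1 sct=0x0
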
00:13:30.935   00:44:20	-- nvme/nvme.sh@107 -- # [[ y == y ]]
00:13:30.935   00:44:20	-- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:13:30.935   00:44:20	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:30.935   00:44:20	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:30.935   00:44:20	-- common/autotest_common.sh@10 -- # set +x
00:13:30.935  ************************************
00:13:30.935  START TEST nvme_fio
00:13:30.935  ************************************
00:13:30.935   00:44:20	-- common/autotest_common.sh@1114 -- # nvme_fio_test
00:13:30.935   00:44:20	-- nvme/nvme.sh@31 -- # PLUGIN_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme
00:13:30.935   00:44:20	-- nvme/nvme.sh@32 -- # ran_fio=false
00:13:30.935    00:44:20	-- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:13:30.935    00:44:20	-- common/autotest_common.sh@1508 -- # bdfs=()
00:13:30.935    00:44:20	-- common/autotest_common.sh@1508 -- # local bdfs
00:13:30.935    00:44:20	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:13:30.935     00:44:20	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:13:30.935     00:44:20	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:13:31.194    00:44:20	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:13:31.194    00:44:20	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:13:31.194   00:44:20	-- nvme/nvme.sh@33 -- # bdfs=('0000:5e:00.0')
00:13:31.194   00:44:20	-- nvme/nvme.sh@33 -- # local bdfs bdf
00:13:31.194   00:44:20	-- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:13:31.194   00:44:20	-- nvme/nvme.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0'
00:13:31.194   00:44:20	-- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:13:31.194  EAL: No free 2048 kB hugepages reported on node 1
00:13:37.765   00:44:26	-- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:13:37.765   00:44:26	-- nvme/nvme.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0'
00:13:37.765  EAL: No free 2048 kB hugepages reported on node 1
00:13:44.329   00:44:33	-- nvme/nvme.sh@41 -- # bs=4096
00:13:44.329   00:44:33	-- nvme/nvme.sh@43 -- # fio_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096
00:13:44.329   00:44:33	-- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096
00:13:44.329   00:44:33	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:13:44.329   00:44:33	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:44.329   00:44:33	-- common/autotest_common.sh@1328 -- # local sanitizers
00:13:44.329   00:44:33	-- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme
00:13:44.329   00:44:33	-- common/autotest_common.sh@1330 -- # shift
00:13:44.329   00:44:33	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:13:44.329   00:44:33	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:13:44.329    00:44:33	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme
00:13:44.329    00:44:33	-- common/autotest_common.sh@1334 -- # grep libasan
00:13:44.329    00:44:33	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:13:44.329   00:44:33	-- common/autotest_common.sh@1334 -- # asan_lib=
00:13:44.329   00:44:33	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:13:44.329   00:44:33	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:13:44.329    00:44:33	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme
00:13:44.329    00:44:33	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:13:44.329    00:44:33	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:13:44.329   00:44:33	-- common/autotest_common.sh@1334 -- # asan_lib=
00:13:44.329   00:44:33	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:13:44.329   00:44:33	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme'
00:13:44.329   00:44:33	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096
00:13:44.329  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:13:44.329  fio-3.35
00:13:44.329  Starting 1 thread
00:13:44.588  EAL: No free 2048 kB hugepages reported on node 1
00:13:54.560  
00:13:54.560  test: (groupid=0, jobs=1): err= 0: pid=999729: Tue Dec 17 00:44:42 2024
00:13:54.560    read: IOPS=56.0k, BW=219MiB/s (230MB/s)(438MiB/2001msec)
00:13:54.560      slat (nsec): min=4545, max=109509, avg=4803.27, stdev=506.76
00:13:54.560      clat (usec): min=199, max=1655, avg=1124.23, stdev=16.26
00:13:54.560       lat (usec): min=204, max=1661, avg=1129.03, stdev=16.27
00:13:54.560      clat percentiles (usec):
00:13:54.560       |  1.00th=[ 1106],  5.00th=[ 1123], 10.00th=[ 1123], 20.00th=[ 1123],
00:13:54.560       | 30.00th=[ 1123], 40.00th=[ 1123], 50.00th=[ 1123], 60.00th=[ 1123],
00:13:54.560       | 70.00th=[ 1123], 80.00th=[ 1123], 90.00th=[ 1123], 95.00th=[ 1139],
00:13:54.560       | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1237],
00:13:54.560       | 99.99th=[ 1303]
00:13:54.560     bw (  KiB/s): min=216296, max=228784, per=99.78%, avg=223685.33, stdev=6551.56, samples=3
00:13:54.560     iops        : min=54076, max=57196, avg=55922.00, stdev=1636.76, samples=3
00:13:54.560    write: IOPS=55.9k, BW=218MiB/s (229MB/s)(437MiB/2001msec); 0 zone resets
00:13:54.560      slat (nsec): min=4610, max=16904, avg=4887.95, stdev=400.09
00:13:54.560      clat (usec): min=208, max=1416, avg=1124.55, stdev=16.58
00:13:54.560       lat (usec): min=212, max=1421, avg=1129.43, stdev=16.60
00:13:54.560      clat percentiles (usec):
00:13:54.560       |  1.00th=[ 1106],  5.00th=[ 1123], 10.00th=[ 1123], 20.00th=[ 1123],
00:13:54.560       | 30.00th=[ 1123], 40.00th=[ 1123], 50.00th=[ 1123], 60.00th=[ 1123],
00:13:54.560       | 70.00th=[ 1123], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1139],
00:13:54.560       | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1237],
00:13:54.560       | 99.99th=[ 1352]
00:13:54.560     bw (  KiB/s): min=216128, max=226704, per=99.67%, avg=222850.67, stdev=5842.75, samples=3
00:13:54.560     iops        : min=54032, max=56676, avg=55712.67, stdev=1460.69, samples=3
00:13:54.560    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.05%
00:13:54.560    lat (msec)   : 2=99.91%
00:13:54.560    cpu          : usr=99.50%, sys=0.05%, ctx=1, majf=0, minf=5
00:13:54.560    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:13:54.560       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:54.560       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:13:54.560       issued rwts: total=112145,111854,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:54.560       latency   : target=0, window=0, percentile=100.00%, depth=128
00:13:54.560  
00:13:54.560  Run status group 0 (all jobs):
00:13:54.560     READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=438MiB (459MB), run=2001-2001msec
00:13:54.560    WRITE: bw=218MiB/s (229MB/s), 218MiB/s-218MiB/s (229MB/s-229MB/s), io=437MiB (458MB), run=2001-2001msec
00:13:54.560   00:44:42	-- nvme/nvme.sh@44 -- # ran_fio=true
00:13:54.560   00:44:42	-- nvme/nvme.sh@46 -- # true
00:13:54.560  
00:13:54.560  real	0m22.230s
00:13:54.560  user	0m20.757s
00:13:54.560  sys	0m2.299s
00:13:54.560   00:44:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:54.560   00:44:42	-- common/autotest_common.sh@10 -- # set +x
00:13:54.560  ************************************
00:13:54.560  END TEST nvme_fio
00:13:54.560  ************************************
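
nvme_fio drives the raw controller through SPDK's fio plugin: fio is launched with LD_PRELOAD pointing at build/fio/spdk_nvme (the ldd/grep dance above only checks for ASAN runtimes that would need preloading first), and the target is named inside the filename itself. Stripped to its moving parts, from the xtrace (SPDK_DIR is my placeholder; this CI image keeps fio at /usr/src/fio/fio):

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}
    # Dots, not colons, in the PCI address: fio splits --filename on ':'.
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
        "$SPDK_DIR/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096

The bs=4096 override follows the 'Extended Data LBA' probe above (nvme.sh@38-41): identify showed no metadata-extended LBA format, so plain 4 KiB blocks are used.
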
00:13:54.560  
00:13:54.560  real	1m46.704s
00:13:54.560  user	4m4.432s
00:13:54.560  sys	0m17.470s
00:13:54.560   00:44:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:54.560   00:44:42	-- common/autotest_common.sh@10 -- # set +x
00:13:54.560  ************************************
00:13:54.560  END TEST nvme
00:13:54.560  ************************************
00:13:54.560   00:44:42	-- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]]
00:13:54.560   00:44:42	-- spdk/autotest.sh@214 -- # run_test nvme_scc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh
00:13:54.560   00:44:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:54.560   00:44:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:54.560   00:44:42	-- common/autotest_common.sh@10 -- # set +x
00:13:54.560  ************************************
00:13:54.560  START TEST nvme_scc
00:13:54.560  ************************************
00:13:54.560   00:44:42	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh
00:13:54.560  * Looking for test storage...
00:13:54.561  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:13:54.561     00:44:42	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:54.561      00:44:42	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:54.561      00:44:42	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:54.561     00:44:42	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:54.561     00:44:42	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:54.561     00:44:42	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:54.561     00:44:42	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:54.561     00:44:42	-- scripts/common.sh@335 -- # IFS=.-:
00:13:54.561     00:44:42	-- scripts/common.sh@335 -- # read -ra ver1
00:13:54.561     00:44:42	-- scripts/common.sh@336 -- # IFS=.-:
00:13:54.561     00:44:42	-- scripts/common.sh@336 -- # read -ra ver2
00:13:54.561     00:44:42	-- scripts/common.sh@337 -- # local 'op=<'
00:13:54.561     00:44:42	-- scripts/common.sh@339 -- # ver1_l=2
00:13:54.561     00:44:42	-- scripts/common.sh@340 -- # ver2_l=1
00:13:54.561     00:44:42	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:54.561     00:44:42	-- scripts/common.sh@343 -- # case "$op" in
00:13:54.561     00:44:42	-- scripts/common.sh@344 -- # : 1
00:13:54.561     00:44:42	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:54.561     00:44:42	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:54.561      00:44:42	-- scripts/common.sh@364 -- # decimal 1
00:13:54.561      00:44:42	-- scripts/common.sh@352 -- # local d=1
00:13:54.561      00:44:42	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.561      00:44:42	-- scripts/common.sh@354 -- # echo 1
00:13:54.561     00:44:42	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:54.561      00:44:42	-- scripts/common.sh@365 -- # decimal 2
00:13:54.561      00:44:42	-- scripts/common.sh@352 -- # local d=2
00:13:54.561      00:44:42	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:54.561      00:44:42	-- scripts/common.sh@354 -- # echo 2
00:13:54.561     00:44:42	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:54.561     00:44:42	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:54.561     00:44:42	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:54.561     00:44:42	-- scripts/common.sh@367 -- # return 0
00:13:54.561     00:44:42	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:54.561     00:44:42	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:54.561  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.561  		--rc genhtml_branch_coverage=1
00:13:54.561  		--rc genhtml_function_coverage=1
00:13:54.561  		--rc genhtml_legend=1
00:13:54.561  		--rc geninfo_all_blocks=1
00:13:54.561  		--rc geninfo_unexecuted_blocks=1
00:13:54.561  		
00:13:54.561  		'
00:13:54.561     00:44:42	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:54.561  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.561  		--rc genhtml_branch_coverage=1
00:13:54.561  		--rc genhtml_function_coverage=1
00:13:54.561  		--rc genhtml_legend=1
00:13:54.561  		--rc geninfo_all_blocks=1
00:13:54.561  		--rc geninfo_unexecuted_blocks=1
00:13:54.561  		
00:13:54.561  		'
00:13:54.561     00:44:42	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:54.561  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.561  		--rc genhtml_branch_coverage=1
00:13:54.561  		--rc genhtml_function_coverage=1
00:13:54.561  		--rc genhtml_legend=1
00:13:54.561  		--rc geninfo_all_blocks=1
00:13:54.561  		--rc geninfo_unexecuted_blocks=1
00:13:54.561  		
00:13:54.561  		'
00:13:54.561     00:44:42	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:54.561  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.561  		--rc genhtml_branch_coverage=1
00:13:54.561  		--rc genhtml_function_coverage=1
00:13:54.561  		--rc genhtml_legend=1
00:13:54.561  		--rc geninfo_all_blocks=1
00:13:54.561  		--rc geninfo_unexecuted_blocks=1
00:13:54.561  		
00:13:54.561  		'
00:13:54.561    00:44:42	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:13:54.561       00:44:42	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:13:54.561      00:44:42	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:13:54.561     00:44:42	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:13:54.561     00:44:42	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:13:54.561      00:44:42	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:54.561      00:44:42	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:54.561      00:44:42	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:54.561       00:44:42	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.561       00:44:42	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.561       00:44:42	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.561       00:44:42	-- paths/export.sh@5 -- # export PATH
00:13:54.561       00:44:42	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:54.561     00:44:42	-- nvme/functions.sh@10 -- # ctrls=()
00:13:54.561     00:44:42	-- nvme/functions.sh@10 -- # declare -A ctrls
00:13:54.561     00:44:42	-- nvme/functions.sh@11 -- # nvmes=()
00:13:54.561     00:44:42	-- nvme/functions.sh@11 -- # declare -A nvmes
00:13:54.561     00:44:42	-- nvme/functions.sh@12 -- # bdfs=()
00:13:54.561     00:44:42	-- nvme/functions.sh@12 -- # declare -A bdfs
00:13:54.561     00:44:42	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:13:54.561     00:44:42	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:13:54.561     00:44:42	-- nvme/functions.sh@14 -- # nvme_name=
00:13:54.561    00:44:42	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:13:54.561    00:44:42	-- nvme/nvme_scc.sh@12 -- # uname
00:13:54.561   00:44:42	-- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:13:54.561   00:44:42	-- nvme/nvme_scc.sh@12 -- # [[ ............................... == QEMU ]]
00:13:54.561   00:44:42	-- nvme/nvme_scc.sh@12 -- # exit 0
00:13:54.561  
00:13:54.561  real	0m0.226s
00:13:54.561  user	0m0.124s
00:13:54.561  sys	0m0.119s
00:13:54.561   00:44:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:54.561   00:44:42	-- common/autotest_common.sh@10 -- # set +x
00:13:54.561  ************************************
00:13:54.561  END TEST nvme_scc
00:13:54.561  ************************************
00:13:54.561   00:44:42	-- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]]
00:13:54.561   00:44:42	-- spdk/autotest.sh@219 -- # [[ 1 -eq 1 ]]
00:13:54.561   00:44:42	-- spdk/autotest.sh@220 -- # run_test nvme_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh
00:13:54.561   00:44:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:54.561   00:44:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:54.561   00:44:42	-- common/autotest_common.sh@10 -- # set +x
00:13:54.561  ************************************
00:13:54.561  START TEST nvme_cuse
00:13:54.561  ************************************
00:13:54.561   00:44:42	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh
00:13:54.561  * Looking for test storage...
00:13:54.561  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:13:54.561    00:44:42	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:54.561     00:44:42	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:54.561     00:44:42	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:54.561    00:44:42	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:54.561    00:44:42	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:54.561    00:44:42	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:54.561    00:44:42	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:54.561    00:44:42	-- scripts/common.sh@335 -- # IFS=.-:
00:13:54.561    00:44:42	-- scripts/common.sh@335 -- # read -ra ver1
00:13:54.561    00:44:42	-- scripts/common.sh@336 -- # IFS=.-:
00:13:54.561    00:44:42	-- scripts/common.sh@336 -- # read -ra ver2
00:13:54.561    00:44:42	-- scripts/common.sh@337 -- # local 'op=<'
00:13:54.561    00:44:42	-- scripts/common.sh@339 -- # ver1_l=2
00:13:54.561    00:44:42	-- scripts/common.sh@340 -- # ver2_l=1
00:13:54.561    00:44:42	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:54.561    00:44:42	-- scripts/common.sh@343 -- # case "$op" in
00:13:54.561    00:44:42	-- scripts/common.sh@344 -- # : 1
00:13:54.561    00:44:42	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:54.561    00:44:42	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:54.561     00:44:42	-- scripts/common.sh@364 -- # decimal 1
00:13:54.561     00:44:42	-- scripts/common.sh@352 -- # local d=1
00:13:54.561     00:44:42	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:54.561     00:44:42	-- scripts/common.sh@354 -- # echo 1
00:13:54.561    00:44:42	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:54.561     00:44:42	-- scripts/common.sh@365 -- # decimal 2
00:13:54.561     00:44:42	-- scripts/common.sh@352 -- # local d=2
00:13:54.561     00:44:42	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:54.561     00:44:42	-- scripts/common.sh@354 -- # echo 2
00:13:54.561    00:44:42	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:54.561    00:44:42	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:54.561    00:44:42	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:54.561    00:44:42	-- scripts/common.sh@367 -- # return 0
00:13:54.561    00:44:42	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:54.561    00:44:42	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:54.561  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.561  		--rc genhtml_branch_coverage=1
00:13:54.561  		--rc genhtml_function_coverage=1
00:13:54.561  		--rc genhtml_legend=1
00:13:54.561  		--rc geninfo_all_blocks=1
00:13:54.561  		--rc geninfo_unexecuted_blocks=1
00:13:54.561  		
00:13:54.561  		'
00:13:54.561    00:44:42	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:54.561  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.562  		--rc genhtml_branch_coverage=1
00:13:54.562  		--rc genhtml_function_coverage=1
00:13:54.562  		--rc genhtml_legend=1
00:13:54.562  		--rc geninfo_all_blocks=1
00:13:54.562  		--rc geninfo_unexecuted_blocks=1
00:13:54.562  		
00:13:54.562  		'
00:13:54.562    00:44:42	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:54.562  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.562  		--rc genhtml_branch_coverage=1
00:13:54.562  		--rc genhtml_function_coverage=1
00:13:54.562  		--rc genhtml_legend=1
00:13:54.562  		--rc geninfo_all_blocks=1
00:13:54.562  		--rc geninfo_unexecuted_blocks=1
00:13:54.562  		
00:13:54.562  		'
00:13:54.562    00:44:42	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:54.562  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:54.562  		--rc genhtml_branch_coverage=1
00:13:54.562  		--rc genhtml_function_coverage=1
00:13:54.562  		--rc genhtml_legend=1
00:13:54.562  		--rc geninfo_all_blocks=1
00:13:54.562  		--rc geninfo_unexecuted_blocks=1
00:13:54.562  		
00:13:54.562  		'
00:13:54.562    00:44:42	-- cuse/nvme_cuse.sh@11 -- # uname
00:13:54.562   00:44:42	-- cuse/nvme_cuse.sh@11 -- # [[ Linux != \L\i\n\u\x ]]
00:13:54.562   00:44:42	-- cuse/nvme_cuse.sh@16 -- # modprobe cuse
00:13:54.562   00:44:42	-- cuse/nvme_cuse.sh@17 -- # run_test nvme_cuse_app /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse
00:13:54.562   00:44:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:54.562   00:44:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:54.562   00:44:42	-- common/autotest_common.sh@10 -- # set +x
00:13:54.562  ************************************
00:13:54.562  START TEST nvme_cuse_app
00:13:54.562  ************************************
00:13:54.562   00:44:42	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse
00:13:54.562  
00:13:54.562  
00:13:54.562       CUnit - A unit testing framework for C - Version 2.1-3
00:13:54.562       http://cunit.sourceforge.net/
00:13:54.562  
00:13:54.562  
00:13:54.562  Suite: nvme_cuse
00:14:06.768    Test: test_cuse_update ...passed
00:14:06.768  
00:14:06.768  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:14:06.768                suites      1      1    n/a      0        0
00:14:06.768                 tests      1      1      1      0        0
00:14:06.768               asserts    925    925    925      0      n/a
00:14:06.768  
00:14:06.768  Elapsed time =    0.024 seconds
00:14:06.768  
00:14:06.768  real	0m11.025s
00:14:06.768  user	0m0.011s
00:14:06.768  sys	0m0.023s
00:14:06.768   00:44:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:06.768   00:44:53	-- common/autotest_common.sh@10 -- # set +x
00:14:06.768  ************************************
00:14:06.768  END TEST nvme_cuse_app
00:14:06.768  ************************************
00:14:06.768   00:44:54	-- cuse/nvme_cuse.sh@18 -- # run_test nvme_cuse_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh
00:14:06.768   00:44:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:06.768   00:44:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:06.768   00:44:54	-- common/autotest_common.sh@10 -- # set +x
00:14:06.768  ************************************
00:14:06.768  START TEST nvme_cuse_rpc
00:14:06.768  ************************************
00:14:06.768   00:44:54	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh
00:14:06.768  * Looking for test storage...
00:14:06.768  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:14:06.768    00:44:54	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:06.768     00:44:54	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:06.768     00:44:54	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:06.768    00:44:54	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:06.768    00:44:54	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:06.768    00:44:54	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:06.768    00:44:54	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:06.768    00:44:54	-- scripts/common.sh@335 -- # IFS=.-:
00:14:06.768    00:44:54	-- scripts/common.sh@335 -- # read -ra ver1
00:14:06.768    00:44:54	-- scripts/common.sh@336 -- # IFS=.-:
00:14:06.768    00:44:54	-- scripts/common.sh@336 -- # read -ra ver2
00:14:06.768    00:44:54	-- scripts/common.sh@337 -- # local 'op=<'
00:14:06.768    00:44:54	-- scripts/common.sh@339 -- # ver1_l=2
00:14:06.768    00:44:54	-- scripts/common.sh@340 -- # ver2_l=1
00:14:06.768    00:44:54	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:06.768    00:44:54	-- scripts/common.sh@343 -- # case "$op" in
00:14:06.768    00:44:54	-- scripts/common.sh@344 -- # : 1
00:14:06.768    00:44:54	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:06.768    00:44:54	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:06.768     00:44:54	-- scripts/common.sh@364 -- # decimal 1
00:14:06.768     00:44:54	-- scripts/common.sh@352 -- # local d=1
00:14:06.768     00:44:54	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:06.768     00:44:54	-- scripts/common.sh@354 -- # echo 1
00:14:06.768    00:44:54	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:06.768     00:44:54	-- scripts/common.sh@365 -- # decimal 2
00:14:06.768     00:44:54	-- scripts/common.sh@352 -- # local d=2
00:14:06.768     00:44:54	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:06.768     00:44:54	-- scripts/common.sh@354 -- # echo 2
00:14:06.768    00:44:54	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:06.768    00:44:54	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:06.768    00:44:54	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:06.768    00:44:54	-- scripts/common.sh@367 -- # return 0
00:14:06.768    00:44:54	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:06.768    00:44:54	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:06.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.768  		--rc genhtml_branch_coverage=1
00:14:06.768  		--rc genhtml_function_coverage=1
00:14:06.768  		--rc genhtml_legend=1
00:14:06.768  		--rc geninfo_all_blocks=1
00:14:06.768  		--rc geninfo_unexecuted_blocks=1
00:14:06.768  		
00:14:06.768  		'
00:14:06.768    00:44:54	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:06.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.768  		--rc genhtml_branch_coverage=1
00:14:06.768  		--rc genhtml_function_coverage=1
00:14:06.768  		--rc genhtml_legend=1
00:14:06.768  		--rc geninfo_all_blocks=1
00:14:06.768  		--rc geninfo_unexecuted_blocks=1
00:14:06.768  		
00:14:06.768  		'
00:14:06.768    00:44:54	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:06.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.768  		--rc genhtml_branch_coverage=1
00:14:06.768  		--rc genhtml_function_coverage=1
00:14:06.768  		--rc genhtml_legend=1
00:14:06.768  		--rc geninfo_all_blocks=1
00:14:06.768  		--rc geninfo_unexecuted_blocks=1
00:14:06.768  		
00:14:06.768  		'
00:14:06.768    00:44:54	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:06.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:06.768  		--rc genhtml_branch_coverage=1
00:14:06.768  		--rc genhtml_function_coverage=1
00:14:06.768  		--rc genhtml_legend=1
00:14:06.768  		--rc geninfo_all_blocks=1
00:14:06.768  		--rc geninfo_unexecuted_blocks=1
00:14:06.768  		
00:14:06.768  		'
00:14:06.768   00:44:54	-- cuse/nvme_cuse_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:14:06.768    00:44:54	-- cuse/nvme_cuse_rpc.sh@13 -- # get_first_nvme_bdf
00:14:06.768    00:44:54	-- common/autotest_common.sh@1519 -- # bdfs=()
00:14:06.768    00:44:54	-- common/autotest_common.sh@1519 -- # local bdfs
00:14:06.768    00:44:54	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:14:06.768     00:44:54	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:14:06.768     00:44:54	-- common/autotest_common.sh@1508 -- # bdfs=()
00:14:06.768     00:44:54	-- common/autotest_common.sh@1508 -- # local bdfs
00:14:06.768     00:44:54	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:06.768      00:44:54	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:14:06.768      00:44:54	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:14:06.768     00:44:54	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:14:06.768     00:44:54	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:14:06.768    00:44:54	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:14:06.768   00:44:54	-- cuse/nvme_cuse_rpc.sh@13 -- # bdf=0000:5e:00.0
00:14:06.768   00:44:54	-- cuse/nvme_cuse_rpc.sh@14 -- # ctrlr_base=/dev/spdk/nvme
00:14:06.768   00:44:54	-- cuse/nvme_cuse_rpc.sh@17 -- # spdk_tgt_pid=1002362
00:14:06.768   00:44:54	-- cuse/nvme_cuse_rpc.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:14:06.768   00:44:54	-- cuse/nvme_cuse_rpc.sh@18 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:14:06.768   00:44:54	-- cuse/nvme_cuse_rpc.sh@20 -- # waitforlisten 1002362
00:14:06.768   00:44:54	-- common/autotest_common.sh@829 -- # '[' -z 1002362 ']'
00:14:06.768   00:44:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:06.768   00:44:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:06.768   00:44:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:06.768  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:06.768   00:44:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:06.768   00:44:54	-- common/autotest_common.sh@10 -- # set +x
00:14:06.768  [2024-12-17 00:44:54.366971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:06.768  [2024-12-17 00:44:54.367042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002362 ]
00:14:06.768  EAL: No free 2048 kB hugepages reported on node 1
00:14:06.768  [2024-12-17 00:44:54.472560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:06.768  [2024-12-17 00:44:54.520734] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:06.768  [2024-12-17 00:44:54.520931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:06.768  [2024-12-17 00:44:54.520935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:06.768  [2024-12-17 00:44:54.675879] 'OCF_Core' volume operations registered
00:14:06.768  [2024-12-17 00:44:54.678125] 'OCF_Cache' volume operations registered
00:14:06.768  [2024-12-17 00:44:54.680769] 'OCF Composite' volume operations registered
00:14:06.768  [2024-12-17 00:44:54.682980] 'SPDK_block_device' volume operations registered
00:14:06.768   00:44:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:06.768   00:44:55	-- common/autotest_common.sh@862 -- # return 0
00:14:06.769   00:44:55	-- cuse/nvme_cuse_rpc.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:14:09.299  Nvme0n1
00:14:09.299   00:44:58	-- cuse/nvme_cuse_rpc.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:14:09.558  [2024-12-17 00:44:58.667514] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:14:09.558  [2024-12-17 00:44:58.667671] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:14:09.558  [2024-12-17 00:44:58.667784] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:14:09.558   00:44:58	-- cuse/nvme_cuse_rpc.sh@25 -- # sleep 5
00:14:14.841   00:45:03	-- cuse/nvme_cuse_rpc.sh@27 -- # '[' '!' -c /dev/spdk/nvme0 ']'
00:14:14.841   00:45:03	-- cuse/nvme_cuse_rpc.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:14:14.841  [
00:14:14.841    {
00:14:14.841      "name": "Nvme0n1",
00:14:14.841      "aliases": [
00:14:14.841        "aa5777e8-d621-4fce-9311-c8c569c2f230"
00:14:14.841      ],
00:14:14.841      "product_name": "NVMe disk",
00:14:14.841      "block_size": 512,
00:14:14.841      "num_blocks": 7814037168,
00:14:14.841      "uuid": "aa5777e8-d621-4fce-9311-c8c569c2f230",
00:14:14.841      "assigned_rate_limits": {
00:14:14.841        "rw_ios_per_sec": 0,
00:14:14.841        "rw_mbytes_per_sec": 0,
00:14:14.841        "r_mbytes_per_sec": 0,
00:14:14.841        "w_mbytes_per_sec": 0
00:14:14.841      },
00:14:14.841      "claimed": false,
00:14:14.841      "zoned": false,
00:14:14.841      "supported_io_types": {
00:14:14.841        "read": true,
00:14:14.841        "write": true,
00:14:14.841        "unmap": true,
00:14:14.841        "write_zeroes": true,
00:14:14.841        "flush": true,
00:14:14.841        "reset": true,
00:14:14.841        "compare": false,
00:14:14.841        "compare_and_write": false,
00:14:14.841        "abort": true,
00:14:14.841        "nvme_admin": true,
00:14:14.841        "nvme_io": true
00:14:14.841      },
00:14:14.841      "driver_specific": {
00:14:14.841        "nvme": [
00:14:14.841          {
00:14:14.841            "pci_address": "0000:5e:00.0",
00:14:14.841            "trid": {
00:14:14.841              "trtype": "PCIe",
00:14:14.841              "traddr": "0000:5e:00.0"
00:14:14.841            },
00:14:14.841            "cuse_device": "spdk/nvme0n1",
00:14:14.841            "ctrlr_data": {
00:14:14.841              "cntlid": 0,
00:14:14.841              "vendor_id": "0x8086",
00:14:14.841              "model_number": "INTEL SSDPE2KX040T8",
00:14:14.841              "serial_number": "BTLJ83030AK84P0DGN",
00:14:14.841              "firmware_revision": "VDV10184",
00:14:14.841              "oacs": {
00:14:14.841                "security": 0,
00:14:14.841                "format": 1,
00:14:14.841                "firmware": 1,
00:14:14.841                "ns_manage": 1
00:14:14.841              },
00:14:14.841              "multi_ctrlr": false,
00:14:14.841              "ana_reporting": false
00:14:14.841            },
00:14:14.841            "vs": {
00:14:14.841              "nvme_version": "1.2"
00:14:14.841            },
00:14:14.841            "ns_data": {
00:14:14.841              "id": 1,
00:14:14.841              "can_share": false
00:14:14.841            }
00:14:14.841          }
00:14:14.841        ],
00:14:14.841        "mp_policy": "active_passive"
00:14:14.841      }
00:14:14.841    }
00:14:14.841  ]
00:14:14.841   00:45:03	-- cuse/nvme_cuse_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers
00:14:15.100  [
00:14:15.100    {
00:14:15.100      "name": "Nvme0",
00:14:15.100      "ctrlrs": [
00:14:15.100        {
00:14:15.100          "state": "enabled",
00:14:15.100          "cuse_device": "spdk/nvme0",
00:14:15.100          "trid": {
00:14:15.100            "trtype": "PCIe",
00:14:15.100            "traddr": "0000:5e:00.0"
00:14:15.100          },
00:14:15.100          "cntlid": 0,
00:14:15.100          "host": {
00:14:15.100            "nqn": "nqn.2014-08.org.nvmexpress:uuid:098ad261-4601-432b-bdf9-b9c593561f44",
00:14:15.100            "addr": "",
00:14:15.100            "svcid": ""
00:14:15.100          }
00:14:15.100        }
00:14:15.100      ]
00:14:15.100    }
00:14:15.100  ]
00:14:15.100   00:45:04	-- cuse/nvme_cuse_rpc.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0
00:14:15.667   00:45:04	-- cuse/nvme_cuse_rpc.sh@35 -- # sleep 1
00:14:16.617   00:45:05	-- cuse/nvme_cuse_rpc.sh@36 -- # '[' -c /dev/spdk/nvme0 ']'
00:14:16.617   00:45:05	-- cuse/nvme_cuse_rpc.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0
00:14:16.877  [2024-12-17 00:45:05.945909] nvme_cuse.c:1343:spdk_nvme_cuse_unregister: *ERROR*: Cannot find associated CUSE device
00:14:16.877  request:
00:14:16.877  {
00:14:16.877    "name": "Nvme0",
00:14:16.877    "method": "bdev_nvme_cuse_unregister",
00:14:16.877    "req_id": 1
00:14:16.877  }
00:14:16.877  Got JSON-RPC error response
00:14:16.877  response:
00:14:16.877  {
00:14:16.877    "code": -19,
00:14:16.877    "message": "No such device"
00:14:16.877  }
00:14:16.877   00:45:05	-- cuse/nvme_cuse_rpc.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:14:17.135  [2024-12-17 00:45:06.204858] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:14:17.135  [2024-12-17 00:45:06.204999] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:14:17.135  [2024-12-17 00:45:06.205084] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:14:17.135   00:45:06	-- cuse/nvme_cuse_rpc.sh@44 -- # sleep 1
00:14:18.074   00:45:07	-- cuse/nvme_cuse_rpc.sh@46 -- # '[' '!' -c /dev/spdk/nvme0 ']'
00:14:18.074   00:45:07	-- cuse/nvme_cuse_rpc.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:14:18.332  [2024-12-17 00:45:07.463320] bdev_nvme_cuse_rpc.c:  57:rpc_nvme_cuse_register: *ERROR*: Failed to register CUSE devices: File exists
00:14:18.332  request:
00:14:18.332  {
00:14:18.332    "name": "Nvme0",
00:14:18.332    "method": "bdev_nvme_cuse_register",
00:14:18.332    "req_id": 1
00:14:18.332  }
00:14:18.332  Got JSON-RPC error response
00:14:18.332  response:
00:14:18.332  {
00:14:18.332    "code": -17,
00:14:18.332    "message": "File exists"
00:14:18.332  }
00:14:18.332   00:45:07	-- cuse/nvme_cuse_rpc.sh@52 -- # sleep 1
00:14:19.268   00:45:08	-- cuse/nvme_cuse_rpc.sh@54 -- # '[' -c /dev/spdk/nvme1 ']'
00:14:19.268   00:45:08	-- cuse/nvme_cuse_rpc.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:14:24.536   00:45:12	-- cuse/nvme_cuse_rpc.sh@60 -- # trap - SIGINT SIGTERM EXIT
00:14:24.536   00:45:12	-- cuse/nvme_cuse_rpc.sh@61 -- # killprocess 1002362
00:14:24.536   00:45:12	-- common/autotest_common.sh@936 -- # '[' -z 1002362 ']'
00:14:24.536   00:45:12	-- common/autotest_common.sh@940 -- # kill -0 1002362
00:14:24.536    00:45:12	-- common/autotest_common.sh@941 -- # uname
00:14:24.536   00:45:12	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:24.536    00:45:12	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1002362
00:14:24.536   00:45:12	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:24.536   00:45:12	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:24.536   00:45:12	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1002362'
00:14:24.536  killing process with pid 1002362
00:14:24.536   00:45:12	-- common/autotest_common.sh@955 -- # kill 1002362
00:14:24.536   00:45:12	-- common/autotest_common.sh@960 -- # wait 1002362
00:14:24.536  
00:14:24.536  real	0m19.405s
00:14:24.536  user	0m38.263s
00:14:24.536  sys	0m1.159s
00:14:24.536   00:45:13	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:24.536   00:45:13	-- common/autotest_common.sh@10 -- # set +x
00:14:24.536  ************************************
00:14:24.536  END TEST nvme_cuse_rpc
00:14:24.536  ************************************
00:14:24.536   00:45:13	-- cuse/nvme_cuse.sh@19 -- # run_test nvme_cli_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh
00:14:24.536   00:45:13	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:24.536   00:45:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:24.536   00:45:13	-- common/autotest_common.sh@10 -- # set +x
00:14:24.536  ************************************
00:14:24.536  START TEST nvme_cli_cuse
00:14:24.537  ************************************
00:14:24.537   00:45:13	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh
00:14:24.537  * Looking for test storage...
00:14:24.537  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:14:24.537     00:45:13	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:24.537      00:45:13	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:24.537      00:45:13	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:24.537     00:45:13	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:24.537     00:45:13	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:24.537     00:45:13	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:24.537     00:45:13	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:24.537     00:45:13	-- scripts/common.sh@335 -- # IFS=.-:
00:14:24.537     00:45:13	-- scripts/common.sh@335 -- # read -ra ver1
00:14:24.537     00:45:13	-- scripts/common.sh@336 -- # IFS=.-:
00:14:24.537     00:45:13	-- scripts/common.sh@336 -- # read -ra ver2
00:14:24.537     00:45:13	-- scripts/common.sh@337 -- # local 'op=<'
00:14:24.537     00:45:13	-- scripts/common.sh@339 -- # ver1_l=2
00:14:24.537     00:45:13	-- scripts/common.sh@340 -- # ver2_l=1
00:14:24.537     00:45:13	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:24.537     00:45:13	-- scripts/common.sh@343 -- # case "$op" in
00:14:24.537     00:45:13	-- scripts/common.sh@344 -- # : 1
00:14:24.537     00:45:13	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:24.537     00:45:13	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:24.537      00:45:13	-- scripts/common.sh@364 -- # decimal 1
00:14:24.537      00:45:13	-- scripts/common.sh@352 -- # local d=1
00:14:24.537      00:45:13	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:24.537      00:45:13	-- scripts/common.sh@354 -- # echo 1
00:14:24.537     00:45:13	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:24.537      00:45:13	-- scripts/common.sh@365 -- # decimal 2
00:14:24.537      00:45:13	-- scripts/common.sh@352 -- # local d=2
00:14:24.537      00:45:13	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:24.537      00:45:13	-- scripts/common.sh@354 -- # echo 2
00:14:24.537     00:45:13	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:24.537     00:45:13	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:24.537     00:45:13	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:24.537     00:45:13	-- scripts/common.sh@367 -- # return 0
00:14:24.537     00:45:13	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:24.537     00:45:13	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:24.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.537  		--rc genhtml_branch_coverage=1
00:14:24.537  		--rc genhtml_function_coverage=1
00:14:24.537  		--rc genhtml_legend=1
00:14:24.537  		--rc geninfo_all_blocks=1
00:14:24.537  		--rc geninfo_unexecuted_blocks=1
00:14:24.537  		
00:14:24.537  		'
00:14:24.537     00:45:13	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:24.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.537  		--rc genhtml_branch_coverage=1
00:14:24.537  		--rc genhtml_function_coverage=1
00:14:24.537  		--rc genhtml_legend=1
00:14:24.537  		--rc geninfo_all_blocks=1
00:14:24.537  		--rc geninfo_unexecuted_blocks=1
00:14:24.537  		
00:14:24.537  		'
00:14:24.537     00:45:13	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:24.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.537  		--rc genhtml_branch_coverage=1
00:14:24.537  		--rc genhtml_function_coverage=1
00:14:24.537  		--rc genhtml_legend=1
00:14:24.537  		--rc geninfo_all_blocks=1
00:14:24.537  		--rc geninfo_unexecuted_blocks=1
00:14:24.537  		
00:14:24.537  		'
00:14:24.537     00:45:13	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:24.537  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:24.537  		--rc genhtml_branch_coverage=1
00:14:24.537  		--rc genhtml_function_coverage=1
00:14:24.537  		--rc genhtml_legend=1
00:14:24.537  		--rc geninfo_all_blocks=1
00:14:24.537  		--rc geninfo_unexecuted_blocks=1
00:14:24.537  		
00:14:24.537  		'
00:14:24.537    00:45:13	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:14:24.537       00:45:13	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:14:24.537      00:45:13	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:14:24.537     00:45:13	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:14:24.537     00:45:13	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:14:24.537      00:45:13	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:24.537      00:45:13	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:24.537      00:45:13	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:24.537       00:45:13	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:24.537       00:45:13	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:24.537       00:45:13	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:24.537       00:45:13	-- paths/export.sh@5 -- # export PATH
00:14:24.537       00:45:13	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:24.537     00:45:13	-- nvme/functions.sh@10 -- # ctrls=()
00:14:24.537     00:45:13	-- nvme/functions.sh@10 -- # declare -A ctrls
00:14:24.537     00:45:13	-- nvme/functions.sh@11 -- # nvmes=()
00:14:24.537     00:45:13	-- nvme/functions.sh@11 -- # declare -A nvmes
00:14:24.537     00:45:13	-- nvme/functions.sh@12 -- # bdfs=()
00:14:24.537     00:45:13	-- nvme/functions.sh@12 -- # declare -A bdfs
00:14:24.537     00:45:13	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:14:24.537     00:45:13	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:14:24.537     00:45:13	-- nvme/functions.sh@14 -- # nvme_name=
00:14:24.537    00:45:13	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:14:24.537   00:45:13	-- cuse/spdk_nvme_cli_cuse.sh@10 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
00:14:24.537   00:45:13	-- cuse/spdk_nvme_cli_cuse.sh@11 -- # mkdir /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
00:14:24.537   00:45:13	-- cuse/spdk_nvme_cli_cuse.sh@13 -- # KERNEL_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out
00:14:24.537   00:45:13	-- cuse/spdk_nvme_cli_cuse.sh@14 -- # CUSE_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out
00:14:24.537   00:45:13	-- cuse/spdk_nvme_cli_cuse.sh@16 -- # NVME_CMD=/usr/local/src/nvme-cli/nvme
00:14:24.537   00:45:13	-- cuse/spdk_nvme_cli_cuse.sh@17 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:14:24.537   00:45:13	-- cuse/spdk_nvme_cli_cuse.sh@19 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:14:27.069  Waiting for block devices as requested
00:14:27.328  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:14:27.328  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:27.328  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:27.587  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:27.587  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:27.587  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:27.848  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:27.848  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:27.848  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:14:28.171  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:28.171  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:28.171  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:28.171  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:28.497  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:28.497  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:28.497  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:28.759  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:14:28.759   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@20 -- # scan_nvme_ctrls
00:14:28.759   00:45:17	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:14:28.759   00:45:17	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:14:28.759   00:45:17	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@49 -- # pci=0000:5e:00.0
00:14:28.759   00:45:17	-- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0
00:14:28.759   00:45:17	-- scripts/common.sh@15 -- # local i
00:14:28.759   00:45:17	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:14:28.759   00:45:17	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:14:28.759   00:45:17	-- scripts/common.sh@24 -- # return 0
00:14:28.759   00:45:17	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:14:28.759   00:45:17	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:14:28.759   00:45:17	-- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@18 -- # shift
00:14:28.759   00:45:17	-- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759    00:45:17	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[vid]=0x8086
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  BTLJ83030AK84P0DGN   ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN  "'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN  '
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  INTEL SSDPE2KX040T8                      ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8                     "'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8                     '
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  VDV10184 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[rab]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  5cd2e4 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  5 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[mdts]=5
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x10200 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ver]=0x10200
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x989680 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0xe4e1c0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x200 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[oaes]=0x200
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ctratt]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[cntrltype]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[mec]=1
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[oacs]=0xe
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.759   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:14:28.759   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:14:28.759    00:45:17	-- nvme/functions.sh@23 -- # nvme0[acl]=3
00:14:28.759   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x18 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[frmw]=0x18
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[lpa]=0xe
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  63 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[elpe]=63
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[npss]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  353 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[cctemp]=353
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[kas]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[pels]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:14:28.760    00:45:17	-- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.760   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.760   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:14:28.760   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[nn]=128
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x6 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[oncs]=0x6
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[fna]=0x4
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[vwc]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[awun]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ocfs]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[sgls]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n   ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[subnqn]=
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0'
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n - ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
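The xtrace above is nvme/functions.sh's nvme_get helper splitting each "name : value" line of nvme id-ctrl output into the global associative array nvme0. A minimal standalone sketch of the same pattern, assuming nvme-cli is installed and /dev/nvme0 exists (illustrative names, not the verbatim SPDK helper):

#!/usr/bin/env bash
# Parse "name : value" lines of id-ctrl into an associative array,
# mirroring the IFS=: / read / eval loop traced above.
declare -A ctrl=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # keep only lines that split into key:value
    reg=${reg//[[:space:]]/}           # drop the padding around the register name
    ctrl[$reg]=$(echo $val)            # unquoted echo trims surrounding blanks
done < <(nvme id-ctrl /dev/nvme0)
echo "oacs=${ctrl[oacs]} sqes=${ctrl[sqes]}"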
00:14:28.761   00:45:17	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:14:28.761   00:45:17	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:14:28.761   00:45:17	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:14:28.761   00:45:17	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:14:28.761   00:45:17	-- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@18 -- # shift
00:14:28.761   00:45:17	-- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761    00:45:17	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.761   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"'
00:14:28.761    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.761   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.761   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[flbas]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[mc]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[dpc]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[mcl]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[msrc]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  01000000f76e00000000000000000000 ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="01000000f76e00000000000000000000"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[nguid]=01000000f76e00000000000000000000
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  000000000000f76e ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="000000000000f76e"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[eui64]=000000000000f76e
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0x2 (in use) ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0x2 (in use)"'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0x2 (in use)'
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:14:28.762   00:45:17	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0   lbads:12 rp:0 "'
00:14:28.762    00:45:17	-- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0   lbads:12 rp:0 '
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # IFS=:
00:14:28.762   00:45:17	-- nvme/functions.sh@21 -- # read -r reg val
00:14:28.762   00:45:17	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:14:28.762   00:45:17	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:14:28.762   00:45:17	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:14:28.762   00:45:17	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0
00:14:28.762   00:45:17	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:14:28.762   00:45:17	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
00:14:28.762    00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@22 -- # get_nvme_with_ns_management
00:14:28.762    00:45:17	-- nvme/functions.sh@153 -- # local _ctrls
00:14:28.762    00:45:17	-- nvme/functions.sh@155 -- # _ctrls=($(get_nvmes_with_ns_management))
00:14:28.762     00:45:17	-- nvme/functions.sh@155 -- # get_nvmes_with_ns_management
00:14:28.762     00:45:17	-- nvme/functions.sh@144 -- # (( 1 == 0 ))
00:14:28.762     00:45:17	-- nvme/functions.sh@146 -- # local ctrl
00:14:28.762     00:45:17	-- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}"
00:14:28.762     00:45:17	-- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt
00:14:28.762     00:45:17	-- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt
00:14:28.762     00:45:17	-- nvme/functions.sh@122 -- # local -A bits
00:14:28.762     00:45:17	-- nvme/functions.sh@125 -- # bits["ss/sr"]=1
00:14:28.762     00:45:17	-- nvme/functions.sh@126 -- # bits["fnvme"]=2
00:14:28.762     00:45:17	-- nvme/functions.sh@127 -- # bits["fc/fi"]=4
00:14:28.762     00:45:17	-- nvme/functions.sh@128 -- # bits["nsmgt"]=8
00:14:28.762     00:45:17	-- nvme/functions.sh@129 -- # bits["self-test"]=16
00:14:28.762     00:45:17	-- nvme/functions.sh@130 -- # bits["directives"]=32
00:14:28.762     00:45:17	-- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64
00:14:28.762     00:45:17	-- nvme/functions.sh@132 -- # bits["virtmgt"]=128
00:14:28.762     00:45:17	-- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256
00:14:28.762     00:45:17	-- nvme/functions.sh@134 -- # bits["getlba"]=512
00:14:28.762     00:45:17	-- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024
00:14:28.762     00:45:17	-- nvme/functions.sh@137 -- # bit=nsmgt
00:14:28.763     00:45:17	-- nvme/functions.sh@138 -- # [[ -n 8 ]]
00:14:28.763      00:45:17	-- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs
00:14:28.763      00:45:17	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs
00:14:28.763      00:45:17	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:14:28.763      00:45:17	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:14:28.763      00:45:17	-- nvme/functions.sh@75 -- # [[ -n 0xe ]]
00:14:28.763      00:45:17	-- nvme/functions.sh@76 -- # echo 0xe
00:14:28.763     00:45:17	-- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] ))
00:14:28.763     00:45:17	-- nvme/functions.sh@148 -- # echo nvme0
00:14:28.763    00:45:17	-- nvme/functions.sh@156 -- # (( 1 > 0 ))
00:14:28.763    00:45:17	-- nvme/functions.sh@157 -- # echo nvme0
00:14:28.763    00:45:17	-- nvme/functions.sh@158 -- # return 0
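The bits table traced above maps each OACS capability name to its bit mask; nsmgt is bit 3 (mask 8), and this controller reports oacs=0xe, so the namespace-management check passes and nvme0 is echoed. The same test, boiled down to a sketch with values taken from this run:

# OACS bit test as performed by get_oacs above.
declare -A bits=( [ss/sr]=1 [fnvme]=2 [fc/fi]=4 [nsmgt]=8 [self-test]=16 )
oacs=0xe                              # from nvme0[oacs] in this log
if (( oacs & bits[nsmgt] )); then     # 0xe & 0x8 = 0x8, nonzero
    echo nvme0                        # controller supports namespace management
fi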
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@22 -- # nvme_name=nvme0
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@27 -- # sel_cmd=()
00:14:28.763    00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@29 -- # get_oncs nvme0
00:14:28.763    00:45:17	-- nvme/functions.sh@169 -- # local ctrl=nvme0
00:14:28.763    00:45:17	-- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:14:28.763    00:45:17	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:14:28.763    00:45:17	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:14:28.763    00:45:17	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:14:28.763    00:45:17	-- nvme/functions.sh@75 -- # [[ -n 0x6 ]]
00:14:28.763    00:45:17	-- nvme/functions.sh@76 -- # echo 0x6
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@29 -- # (( 0x6 & 1 << 4 ))
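ONCS 0x6 sets bits 1 and 2 (Write Uncorrectable and Dataset Management), while the test above probes bit 4, the Save field support for Set/Get Features; the bit is clear here, so sel_cmd stays an empty array. A sketch of that decision (the flag added on success is a placeholder, not confirmed from this log):

# Only pass a save/select flag when ONCS bit 4 (save field supported) is set.
oncs=0x6
sel_cmd=()
(( oncs & (1 << 4) )) && sel_cmd=(--sel=1)   # hypothetical flag; bit 4 is clear here
echo "sel_cmd has ${#sel_cmd[@]} element(s)"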
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@33 -- # ctrlr=/dev/nvme0
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@34 -- # ns=/dev/nvme0n1
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@35 -- # bdf=0000:5e:00.0
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@37 -- # waitforblk nvme0n1
00:14:28.763   00:45:17	-- common/autotest_common.sh@1224 -- # local i=0
00:14:28.763   00:45:17	-- common/autotest_common.sh@1225 -- # lsblk -l -o NAME
00:14:28.763   00:45:17	-- common/autotest_common.sh@1225 -- # grep -q -w nvme0n1
00:14:28.763   00:45:17	-- common/autotest_common.sh@1231 -- # lsblk -l -o NAME
00:14:28.763   00:45:17	-- common/autotest_common.sh@1231 -- # grep -q -w nvme0n1
00:14:28.763   00:45:17	-- common/autotest_common.sh@1235 -- # return 0
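waitforblk, traced above, simply polls lsblk until the namespace shows up as a block device. A minimal sketch of that polling loop, with retry count and interval assumed rather than taken from autotest_common.sh:

# Wait until a block device with the given name is listed by lsblk.
waitforblk_sketch() {
    local name=$1 i=0
    until lsblk -l -o NAME | grep -q -w "$name"; do
        (( ++i > 100 )) && return 1   # give up after ~10 seconds
        sleep 0.1
    done
}
waitforblk_sketch nvme0n1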
00:14:28.763    00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:14:28.763    00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # grep oacs
00:14:28.763    00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # cut -d: -f2
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # oacs=' 0xe'
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@40 -- # oacs_firmware=4
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/nvme0n1
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@43 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@44 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/nvme0n1
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@46 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@47 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/nvme0
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@48 -- # '[' 4 -ne 0 ']'
00:14:28.763   00:45:17	-- cuse/spdk_nvme_cli_cuse.sh@49 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/nvme0
00:14:28.763   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@51 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/nvme0
00:14:28.763  Smart Log for NVME device:nvme0 namespace-id:ffffffff
00:14:28.763  critical_warning			: 0
00:14:28.763  temperature				: 37 °C (310 K)
00:14:28.763  available_spare				: 99%
00:14:28.763  available_spare_threshold		: 10%
00:14:28.763  percentage_used				: 32%
00:14:28.763  endurance group critical warning summary: 0
00:14:28.763  Data Units Read				: 631,286,601 (323.22 TB)
00:14:28.763  Data Units Written			: 792,639,254 (405.83 TB)
00:14:28.763  host_read_commands			: 37,097,247,176
00:14:28.763  host_write_commands			: 43,076,543,780
00:14:28.763  controller_busy_time			: 3,927
00:14:28.763  power_cycles				: 31
00:14:28.763  power_on_hours				: 20,880
00:14:28.763  unsafe_shutdowns			: 46
00:14:28.763  media_errors				: 0
00:14:28.763  num_err_log_entries			: 38,801
00:14:28.763  Warning Temperature Time		: 2211
00:14:28.763  Critical Composite Temperature Time	: 0
00:14:28.763  Thermal Management T1 Trans Count	: 0
00:14:28.763  Thermal Management T2 Trans Count	: 0
00:14:28.763  Thermal Management T1 Total Time	: 0
00:14:28.763  Thermal Management T2 Total Time	: 0
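In the SMART log above, one data unit is 1,000 units of 512 bytes per the NVMe spec, which is how the TB figures in parentheses are derived. Recomputing the read counter as a quick check:

# One SMART data unit = 512 * 1000 bytes.
units=631286601
echo $(( units * 512 * 1000 )) bytes   # 323218739712000 -> ~323.22 TB (decimal)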
00:14:28.763   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@52 -- # /usr/local/src/nvme-cli/nvme error-log /dev/nvme0
00:14:28.763   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@53 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/nvme0 -f 1 -l 100
00:14:29.022   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@54 -- # /usr/local/src/nvme-cli/nvme get-log /dev/nvme0 -i 1 -l 100
00:14:29.022   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0
00:14:29.022   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@59 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/nvme0 -n 1 -f 2 -v 0
00:14:29.022   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@59 -- # true
00:14:29.022   00:45:18	-- cuse/spdk_nvme_cli_cuse.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:14:32.308  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:14:32.308  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:14:32.567  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:14:32.567  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:14:32.567  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:14:35.852  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:14:35.852   00:45:24	-- cuse/spdk_nvme_cli_cuse.sh@64 -- # spdk_tgt_pid=1008190
00:14:35.852   00:45:24	-- cuse/spdk_nvme_cli_cuse.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:14:35.852   00:45:24	-- cuse/spdk_nvme_cli_cuse.sh@65 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:14:35.852   00:45:24	-- cuse/spdk_nvme_cli_cuse.sh@67 -- # waitforlisten 1008190
00:14:35.852   00:45:24	-- common/autotest_common.sh@829 -- # '[' -z 1008190 ']'
00:14:35.852   00:45:24	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:35.852   00:45:24	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:35.852   00:45:24	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:35.852  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:35.852   00:45:24	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:35.852   00:45:24	-- common/autotest_common.sh@10 -- # set +x
00:14:35.852  [2024-12-17 00:45:24.895821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:35.852  [2024-12-17 00:45:24.895901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008190 ]
00:14:35.852  EAL: No free 2048 kB hugepages reported on node 1
00:14:35.852  [2024-12-17 00:45:25.005174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:35.852  [2024-12-17 00:45:25.053649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:35.852  [2024-12-17 00:45:25.053844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:35.852  [2024-12-17 00:45:25.053849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:36.110  [2024-12-17 00:45:25.221107] 'OCF_Core' volume operations registered
00:14:36.110  [2024-12-17 00:45:25.223510] 'OCF_Cache' volume operations registered
00:14:36.110  [2024-12-17 00:45:25.226389] 'OCF Composite' volume operations registered
00:14:36.110  [2024-12-17 00:45:25.228790] 'SPDK_block_device' volume operations registered
00:14:36.676   00:45:25	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:36.676   00:45:25	-- common/autotest_common.sh@862 -- # return 0
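With waitforlisten returned, spdk_tgt (pid 1008190, cores 0-1 via -m 0x3) is serving RPCs on /var/tmp/spdk.sock. Its readiness check amounts to polling the socket until an RPC answers; a sketch with assumed retry bounds (rpc_get_methods is a standard SPDK RPC):

# Poll the SPDK RPC socket until the target is ready to serve requests.
rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
for i in {1..100}; do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done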
00:14:36.676   00:45:25	-- cuse/spdk_nvme_cli_cuse.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:14:39.962  Nvme0n1
00:14:39.962   00:45:28	-- cuse/spdk_nvme_cli_cuse.sh@70 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:14:39.962  [2024-12-17 00:45:29.167909] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:14:39.962  [2024-12-17 00:45:29.168078] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:14:39.963  [2024-12-17 00:45:29.168197] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:14:39.963   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@72 -- # ctrlr=/dev/spdk/nvme0
00:14:39.963   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@73 -- # ns=/dev/spdk/nvme0n1
00:14:39.963   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@74 -- # waitforfile /dev/spdk/nvme0n1
00:14:39.963   00:45:29	-- common/autotest_common.sh@1254 -- # local i=0
00:14:39.963   00:45:29	-- common/autotest_common.sh@1255 -- # '[' '!' -e /dev/spdk/nvme0n1 ']'
00:14:39.963   00:45:29	-- common/autotest_common.sh@1261 -- # '[' '!' -e /dev/spdk/nvme0n1 ']'
00:14:39.963   00:45:29	-- common/autotest_common.sh@1265 -- # return 0
00:14:39.963   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:14:40.221  [
00:14:40.221    {
00:14:40.221      "name": "Nvme0n1",
00:14:40.221      "aliases": [
00:14:40.221        "eb0e13b9-4aef-4085-b591-adf8ab7053f5"
00:14:40.221      ],
00:14:40.221      "product_name": "NVMe disk",
00:14:40.221      "block_size": 512,
00:14:40.221      "num_blocks": 7814037168,
00:14:40.221      "uuid": "eb0e13b9-4aef-4085-b591-adf8ab7053f5",
00:14:40.221      "assigned_rate_limits": {
00:14:40.221        "rw_ios_per_sec": 0,
00:14:40.221        "rw_mbytes_per_sec": 0,
00:14:40.221        "r_mbytes_per_sec": 0,
00:14:40.221        "w_mbytes_per_sec": 0
00:14:40.221      },
00:14:40.221      "claimed": false,
00:14:40.221      "zoned": false,
00:14:40.221      "supported_io_types": {
00:14:40.221        "read": true,
00:14:40.221        "write": true,
00:14:40.221        "unmap": true,
00:14:40.221        "write_zeroes": true,
00:14:40.221        "flush": true,
00:14:40.221        "reset": true,
00:14:40.221        "compare": false,
00:14:40.221        "compare_and_write": false,
00:14:40.221        "abort": true,
00:14:40.221        "nvme_admin": true,
00:14:40.221        "nvme_io": true
00:14:40.221      },
00:14:40.221      "driver_specific": {
00:14:40.221        "nvme": [
00:14:40.222          {
00:14:40.222            "pci_address": "0000:5e:00.0",
00:14:40.222            "trid": {
00:14:40.222              "trtype": "PCIe",
00:14:40.222              "traddr": "0000:5e:00.0"
00:14:40.222            },
00:14:40.222            "cuse_device": "spdk/nvme0n1",
00:14:40.222            "ctrlr_data": {
00:14:40.222              "cntlid": 0,
00:14:40.222              "vendor_id": "0x8086",
00:14:40.222              "model_number": "INTEL SSDPE2KX040T8",
00:14:40.222              "serial_number": "BTLJ83030AK84P0DGN",
00:14:40.222              "firmware_revision": "VDV10184",
00:14:40.222              "oacs": {
00:14:40.222                "security": 0,
00:14:40.222                "format": 1,
00:14:40.222                "firmware": 1,
00:14:40.222                "ns_manage": 1
00:14:40.222              },
00:14:40.222              "multi_ctrlr": false,
00:14:40.222              "ana_reporting": false
00:14:40.222            },
00:14:40.222            "vs": {
00:14:40.222              "nvme_version": "1.2"
00:14:40.222            },
00:14:40.222            "ns_data": {
00:14:40.222              "id": 1,
00:14:40.222              "can_share": false
00:14:40.222            }
00:14:40.222          }
00:14:40.222        ],
00:14:40.222        "mp_policy": "active_passive"
00:14:40.222      }
00:14:40.222    }
00:14:40.222  ]
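The bdev JSON above reports block_size 512 and num_blocks 7814037168, which matches the 4,000,787,030,016-byte nvmcap read from id-ns earlier in this run. A quick cross-check (jq assumed available on the build host):

# num_blocks * block_size should equal the namespace's nvmcap in bytes.
echo $(( 7814037168 * 512 ))   # -> 4000787030016
# or straight from the RPC output:
# scripts/rpc.py bdev_get_bdevs | jq '.[0].num_blocks * .[0].block_size'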
00:14:40.222   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers
00:14:40.480  [
00:14:40.480    {
00:14:40.480      "name": "Nvme0",
00:14:40.480      "ctrlrs": [
00:14:40.480        {
00:14:40.480          "state": "enabled",
00:14:40.480          "cuse_device": "spdk/nvme0",
00:14:40.480          "trid": {
00:14:40.480            "trtype": "PCIe",
00:14:40.480            "traddr": "0000:5e:00.0"
00:14:40.480          },
00:14:40.481          "cntlid": 0,
00:14:40.481          "host": {
00:14:40.481            "nqn": "nqn.2014-08.org.nvmexpress:uuid:f1982854-fc69-4b58-9fed-f48fcadbef1d",
00:14:40.481            "addr": "",
00:14:40.481            "svcid": ""
00:14:40.481          }
00:14:40.481        }
00:14:40.481      ]
00:14:40.481    }
00:14:40.481  ]
00:14:40.481   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@79 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/spdk/nvme0n1
00:14:40.481   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@80 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/spdk/nvme0n1
00:14:40.481   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@81 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/spdk/nvme0n1
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@83 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/spdk/nvme0
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@84 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/spdk/nvme0
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@85 -- # '[' 4 -ne 0 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@86 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/spdk/nvme0
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@88 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/spdk/nvme0
00:14:40.740  Smart Log for NVME device:nvme0 namespace-id:ffffffff
00:14:40.740  critical_warning			: 0
00:14:40.740  temperature				: 37 °C (310 K)
00:14:40.740  available_spare				: 99%
00:14:40.740  available_spare_threshold		: 10%
00:14:40.740  percentage_used				: 32%
00:14:40.740  endurance group critical warning summary: 0
00:14:40.740  Data Units Read				: 631,286,603 (323.22 TB)
00:14:40.740  Data Units Written			: 792,639,254 (405.83 TB)
00:14:40.740  host_read_commands			: 37,097,247,231
00:14:40.740  host_write_commands			: 43,076,543,780
00:14:40.740  controller_busy_time			: 3,927
00:14:40.740  power_cycles				: 31
00:14:40.740  power_on_hours				: 20,880
00:14:40.740  unsafe_shutdowns			: 46
00:14:40.740  media_errors				: 0
00:14:40.740  num_err_log_entries			: 38,801
00:14:40.740  Warning Temperature Time		: 2211
00:14:40.740  Critical Composite Temperature Time	: 0
00:14:40.740  Thermal Management T1 Trans Count	: 0
00:14:40.740  Thermal Management T2 Trans Count	: 0
00:14:40.740  Thermal Management T1 Total Time	: 0
00:14:40.740  Thermal Management T2 Total Time	: 0
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@89 -- # /usr/local/src/nvme-cli/nvme error-log /dev/spdk/nvme0
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@90 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/spdk/nvme0 -f 1 -l 100
00:14:40.740  [2024-12-17 00:45:29.873023] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@91 -- # /usr/local/src/nvme-cli/nvme get-log /dev/spdk/nvme0 -i 1 -l 100
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@92 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0
00:14:40.740  [2024-12-17 00:45:29.915364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@93 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/spdk/nvme0 -n 1 -f 2 -v 0
00:14:40.740  [2024-12-17 00:45:29.935425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:186 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:14:40.740  [2024-12-17 00:45:29.935453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT NAMESPACE SPECIFIC (01/0f) qid:0 cid:186 cdw0:0 sqhd:000d p:1 m:0 dnr:1
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@93 -- # true
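The set-feature above is expected to fail: Power Management (FID 0x2) is not namespace-specific, hence the FEATURE NOT NAMESPACE SPECIFIC status, and the trailing true keeps that failure from aborting the test. The idiom, isolated:

# Tolerate an expected failure without tripping `set -e` / error traps.
/usr/local/src/nvme-cli/nvme set-feature /dev/spdk/nvme0 -n 1 -f 2 -v 0 || true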
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 ']'
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6
00:14:40.740   00:45:29	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11 ']'
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11
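The eleven iterations above compare the output of each nvme-cli command issued earlier against the kernel device with the same command issued against the CUSE device; the sed is a no-op in this run only because both controllers happen to be named nvme0. Condensed, the loop is:

# Kernel-path vs CUSE-path output comparison, as traced above.
d=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
for i in {1..11}; do
    [ -f "$d/kernel.out.$i" ] && [ -f "$d/cuse.out.$i" ] || continue
    sed -i "s/nvme0/nvme0/g" "$d/kernel.out.$i"   # names come from variables; equal here
    diff --suppress-common-lines "$d/kernel.out.$i" "$d/cuse.out.$i"
done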
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@102 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
00:14:40.998   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@105 -- # head -c512 /dev/urandom
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@106 -- # /usr/local/src/nvme-cli/nvme write /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file
00:14:40.999  write: Success
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme read /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file
00:14:40.999  read: Success
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@108 -- # cmp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@109 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file
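Steps 105-109 above are a 512-byte data-integrity round trip through the CUSE namespace: random data is written, read back, and byte-compared. In isolation (long workspace paths shortened for the sketch):

# Write 512 random bytes through CUSE, read them back, verify bit-for-bit.
head -c512 /dev/urandom > write_file
/usr/local/src/nvme-cli/nvme write /dev/spdk/nvme0n1 --data-size=512 --data=write_file
/usr/local/src/nvme-cli/nvme read  /dev/spdk/nvme0n1 --data-size=512 --data=read_file
cmp write_file read_file && echo "round-trip OK"
rm -f write_file read_file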
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@113 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 5 --cdw10=0x3ff0003 --cdw11=0x1 -r
00:14:40.999  Admin Command Create I/O Completion Queue is Success and result: 0x00000000
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@114 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 4 --cdw10=0x3
00:14:40.999  Admin Command Delete I/O Completion Queue is Success and result: 0x00000000
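The two passthru calls above exercise raw admin opcodes through CUSE: 0x5 is Create I/O Completion Queue and 0x4 is Delete I/O Completion Queue. For the create, cdw10 packs the zero-based queue size in bits 31:16 and the queue ID in bits 15:0, while cdw11=0x1 sets the PC (physically contiguous) bit. Decoding the value from the log:

# cdw10 for Create I/O Completion Queue: (qsize-1) << 16 | qid.
qid=3 qsize=1024
printf 'cdw10=0x%x\n' $(( (qsize - 1) << 16 | qid ))   # -> 0x3ff0003, as used above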
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@116 -- # [[ -c /dev/spdk/nvme0 ]]
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@117 -- # [[ -c /dev/spdk/nvme0n1 ]]
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@119 -- # trap - SIGINT SIGTERM EXIT
00:14:40.999   00:45:30	-- cuse/spdk_nvme_cli_cuse.sh@120 -- # killprocess 1008190
00:14:40.999   00:45:30	-- common/autotest_common.sh@936 -- # '[' -z 1008190 ']'
00:14:40.999   00:45:30	-- common/autotest_common.sh@940 -- # kill -0 1008190
00:14:40.999    00:45:30	-- common/autotest_common.sh@941 -- # uname
00:14:40.999   00:45:30	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:40.999    00:45:30	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1008190
00:14:41.257   00:45:30	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:41.257   00:45:30	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:41.257   00:45:30	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1008190'
00:14:41.257  killing process with pid 1008190
00:14:41.257   00:45:30	-- common/autotest_common.sh@955 -- # kill 1008190
00:14:41.257   00:45:30	-- common/autotest_common.sh@960 -- # wait 1008190
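killprocess, traced above, verifies the pid is still alive (kill -0), checks the process name so it never signals something like sudo, then kills and reaps the target. The core of that teardown, sketched with this run's pid:

# killprocess-style teardown (name check simplified).
pid=1008190
if kill -0 "$pid" 2>/dev/null; then
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || exit 1   # refuse to kill sudo
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap; works because spdk_tgt is our child
fi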
00:14:46.517  
00:14:46.517  real	0m21.261s
00:14:46.517  user	0m21.703s
00:14:46.517  sys	0m5.840s
00:14:46.517   00:45:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:46.517   00:45:34	-- common/autotest_common.sh@10 -- # set +x
00:14:46.517  ************************************
00:14:46.517  END TEST nvme_cli_cuse
00:14:46.517  ************************************
00:14:46.517   00:45:34	-- cuse/nvme_cuse.sh@20 -- # run_test nvme_cli_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh
00:14:46.517   00:45:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:46.517   00:45:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:46.517   00:45:34	-- common/autotest_common.sh@10 -- # set +x
00:14:46.517  ************************************
00:14:46.517  START TEST nvme_cli_plugin
00:14:46.517  ************************************
00:14:46.517   00:45:34	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh
00:14:46.517  * Looking for test storage...
00:14:46.517  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:14:46.517     00:45:34	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:46.517      00:45:34	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:46.517      00:45:34	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:46.517     00:45:34	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:46.517     00:45:34	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:46.517     00:45:34	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:46.517     00:45:34	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:46.517     00:45:34	-- scripts/common.sh@335 -- # IFS=.-:
00:14:46.517     00:45:34	-- scripts/common.sh@335 -- # read -ra ver1
00:14:46.517     00:45:34	-- scripts/common.sh@336 -- # IFS=.-:
00:14:46.517     00:45:34	-- scripts/common.sh@336 -- # read -ra ver2
00:14:46.517     00:45:34	-- scripts/common.sh@337 -- # local 'op=<'
00:14:46.517     00:45:34	-- scripts/common.sh@339 -- # ver1_l=2
00:14:46.517     00:45:34	-- scripts/common.sh@340 -- # ver2_l=1
00:14:46.517     00:45:34	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:46.517     00:45:34	-- scripts/common.sh@343 -- # case "$op" in
00:14:46.517     00:45:34	-- scripts/common.sh@344 -- # : 1
00:14:46.517     00:45:34	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:46.517     00:45:34	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:46.517      00:45:34	-- scripts/common.sh@364 -- # decimal 1
00:14:46.517      00:45:34	-- scripts/common.sh@352 -- # local d=1
00:14:46.517      00:45:34	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:46.517      00:45:34	-- scripts/common.sh@354 -- # echo 1
00:14:46.517     00:45:34	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:46.517      00:45:34	-- scripts/common.sh@365 -- # decimal 2
00:14:46.517      00:45:34	-- scripts/common.sh@352 -- # local d=2
00:14:46.517      00:45:34	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:46.517      00:45:34	-- scripts/common.sh@354 -- # echo 2
00:14:46.517     00:45:34	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:46.517     00:45:34	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:46.517     00:45:34	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:46.517     00:45:34	-- scripts/common.sh@367 -- # return 0
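
The lt 1.15 2 trace above is scripts/common.sh comparing dotted version strings component-wise: both are split on ., - and : into arrays, then compared numerically index by index, with missing components treated as 0. A standalone sketch of the same idea (the name ver_lt is mine, and it assumes purely numeric components, as the real cmp_versions does for this path):

    ver_lt() {                          # ver_lt A B: succeeds when A < B
        local -a v1 v2; local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov is older than 2.x"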
00:14:46.518     00:45:34	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:46.518     00:45:34	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:46.518  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.518  		--rc genhtml_branch_coverage=1
00:14:46.518  		--rc genhtml_function_coverage=1
00:14:46.518  		--rc genhtml_legend=1
00:14:46.518  		--rc geninfo_all_blocks=1
00:14:46.518  		--rc geninfo_unexecuted_blocks=1
00:14:46.518  		
00:14:46.518  		'
00:14:46.518     00:45:34	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:46.518  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.518  		--rc genhtml_branch_coverage=1
00:14:46.518  		--rc genhtml_function_coverage=1
00:14:46.518  		--rc genhtml_legend=1
00:14:46.518  		--rc geninfo_all_blocks=1
00:14:46.518  		--rc geninfo_unexecuted_blocks=1
00:14:46.518  		
00:14:46.518  		'
00:14:46.518     00:45:34	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:46.518  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.518  		--rc genhtml_branch_coverage=1
00:14:46.518  		--rc genhtml_function_coverage=1
00:14:46.518  		--rc genhtml_legend=1
00:14:46.518  		--rc geninfo_all_blocks=1
00:14:46.518  		--rc geninfo_unexecuted_blocks=1
00:14:46.518  		
00:14:46.518  		'
00:14:46.518     00:45:34	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:46.518  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.518  		--rc genhtml_branch_coverage=1
00:14:46.518  		--rc genhtml_function_coverage=1
00:14:46.518  		--rc genhtml_legend=1
00:14:46.518  		--rc geninfo_all_blocks=1
00:14:46.518  		--rc geninfo_unexecuted_blocks=1
00:14:46.518  		
00:14:46.518  		'
00:14:46.518    00:45:34	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:14:46.518       00:45:34	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:14:46.518      00:45:34	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:14:46.518     00:45:35	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:14:46.518     00:45:35	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:14:46.518      00:45:35	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:46.518      00:45:35	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:46.518      00:45:35	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:46.518       00:45:35	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:46.518       00:45:35	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:46.518       00:45:35	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:46.518       00:45:35	-- paths/export.sh@5 -- # export PATH
00:14:46.518       00:45:35	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
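
The three PATH values above show why the paths/export.sh entries pile up: every time the file is sourced it prepends the go, protoc and golangci directories again, so earlier copies survive further down the string. Harmless here, but if the duplication mattered, a guard like the following (prepend_path is a hypothetical helper, not in the tree) would keep the prepend idempotent:

    prepend_path() {                    # prepend $1 only if not already on PATH
        case ":$PATH:" in
            *":$1:"*) ;;                # already there, leave PATH untouched
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH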
00:14:46.518     00:45:35	-- nvme/functions.sh@10 -- # ctrls=()
00:14:46.518     00:45:35	-- nvme/functions.sh@10 -- # declare -A ctrls
00:14:46.518     00:45:35	-- nvme/functions.sh@11 -- # nvmes=()
00:14:46.518     00:45:35	-- nvme/functions.sh@11 -- # declare -A nvmes
00:14:46.518     00:45:35	-- nvme/functions.sh@12 -- # bdfs=()
00:14:46.518     00:45:35	-- nvme/functions.sh@12 -- # declare -A bdfs
00:14:46.518     00:45:35	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:14:46.518     00:45:35	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:14:46.518     00:45:35	-- nvme/functions.sh@14 -- # nvme_name=
00:14:46.518    00:45:35	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:14:46.518   00:45:35	-- cuse/spdk_nvme_cli_plugin.sh@11 -- # trap 'killprocess $spdk_tgt_pid; "$rootdir/scripts/setup.sh" reset' EXIT
00:14:46.518   00:45:35	-- cuse/spdk_nvme_cli_plugin.sh@28 -- # kernel_out=()
00:14:46.518   00:45:35	-- cuse/spdk_nvme_cli_plugin.sh@29 -- # cuse_out=()
00:14:46.518   00:45:35	-- cuse/spdk_nvme_cli_plugin.sh@31 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:14:46.518   00:45:35	-- cuse/spdk_nvme_cli_plugin.sh@36 -- # export PCI_BLOCKED=
00:14:46.518   00:45:35	-- cuse/spdk_nvme_cli_plugin.sh@36 -- # PCI_BLOCKED=
00:14:46.518   00:45:35	-- cuse/spdk_nvme_cli_plugin.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:14:49.045  Waiting for block devices as requested
00:14:49.045  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:14:49.045  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:49.045  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:49.303  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:49.303  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:49.303  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:49.561  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:49.561  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:49.561  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:14:49.819  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:49.819  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:49.819  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:50.077  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:50.077  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:50.077  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:50.631  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:50.631  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
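
setup.sh reset, whose output appears above, returns every device from vfio-pci to its native kernel driver: the NVMe controller at 0000:5e:00.0 back to nvme and the sixteen I/OAT DMA channels back to ioatdma, which is what makes /dev/nvme0 reappear for the kernel-side half of the test. The rebind itself is standard sysfs plumbing; roughly (device and driver names taken from the log, the real setup.sh logic is considerably more involved):

    bdf=0000:5e:00.0
    echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind     # detach from vfio-pci
    echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe               # let the kernel bind nvme
    echo        > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override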
00:14:50.631   00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@39 -- # scan_nvme_ctrls
00:14:50.631   00:45:39	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:14:50.631   00:45:39	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:14:50.631   00:45:39	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:14:50.631   00:45:39	-- nvme/functions.sh@49 -- # pci=0000:5e:00.0
00:14:50.631   00:45:39	-- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0
00:14:50.631   00:45:39	-- scripts/common.sh@15 -- # local i
00:14:50.631   00:45:39	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:14:50.631   00:45:39	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:14:50.631   00:45:39	-- scripts/common.sh@24 -- # return 0
00:14:50.631   00:45:39	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:14:50.631   00:45:39	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:14:50.631    00:45:39	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:14:50.631  nvme0 id-ctrl fields parsed into the nvme0[] associative array:
00:14:50.631    vid=0x8086 ssvid=0x8086 sn='BTLJ83030AK84P0DGN  ' mn='INTEL SSDPE2KX040T8' fr=VDV10184
00:14:50.631    rab=0 ieee=5cd2e4 cmic=0 mdts=5 cntlid=0 ver=0x10200 rtd3r=0x989680 rtd3e=0xe4e1c0
00:14:50.631    oaes=0x200 ctratt=0 rrls=0 cntrltype=0 fguid=00000000-0000-0000-0000-000000000000
00:14:50.631    crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=1 oacs=0xe acl=3 aerl=3 frmw=0x18 lpa=0xe
00:14:50.631    elpe=63 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=353 mtfa=0 hmpre=0 hmmin=0
00:14:50.631    tnvmcap=4,000,787,030,016 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:14:50.631    mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
00:14:50.631    anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=128
00:14:50.631    oncs=0x6 fuses=0 fna=0x4 vwc=0 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0 sgls=0
00:14:50.631    mnan=0 maxdna=0 maxcna=0 subnqn= ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:14:50.634    ps0='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0'
00:14:50.634    rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
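
nvme_get, from test/common/nvme/functions.sh, is what produced the register dump above: it runs nvme id-ctrl, splits each "reg : val" output line on the colon, and evals the pair into a named global associative array. A condensed sketch of that parser; the real helper handles its arguments and quoting somewhat differently (it preserves padding in the field name before trimming, which is why sn keeps its trailing spaces):

    nvme_get() {                            # nvme_get <array-name> <device>
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # field name: drop all padding
            val=${val# }                    # value: drop the space after ':'
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    nvme_get nvme0 /dev/nvme0
    echo "${nvme0[sn]} / ${nvme0[fr]}"      # BTLJ83030AK84P0DGN / VDV10184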
00:14:50.634   00:45:39	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:14:50.634   00:45:39	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:14:50.634   00:45:39	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:14:50.634   00:45:39	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:14:50.634   00:45:39	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:14:50.634    00:45:39	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:14:50.634  nvme0n1 id-ns fields parsed into the nvme0n1[] associative array:
00:14:50.634    nsze=0x1d1c0beb0 ncap=0x1d1c0beb0 nuse=0x1d1c0beb0 nsfeat=0 nlbaf=1 flbas=0 mc=0
00:14:50.634    dpc=0 dps=0 nmic=0 rescap=0 fpi=0 dlfeat=0 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0
00:14:50.635    nabspf=0 noiob=0 nvmcap=4,000,787,030,016 mssrl=0 mcl=0 msrc=0 nulbaf=0 anagrpid=0
00:14:50.635    nsattr=0 nvmsetid=0 endgid=0 nguid=01000000f76e00000000000000000000 eui64=000000000000f76e
00:14:50.635    lbaf0='ms:0   lbads:9  rp:0x2 (in use)' lbaf1='ms:0   lbads:12 rp:0 '
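
The namespace numbers above are internally consistent: nsze/ncap/nuse are 0x1d1c0beb0 blocks of 512 bytes (lbaf0, lbads:9, is the in-use format), which is exactly the 4,000,787,030,016-byte nvmcap the controller reports, i.e. the "4.00 TB" shown by nvme list below. Quick check in shell arithmetic:

    printf '%d blocks x 512 B = %d bytes\n' $((0x1d1c0beb0)) $((0x1d1c0beb0 * 512))
    # 7814037168 blocks x 512 B = 4000787030016 bytes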
00:14:50.635   00:45:39	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:14:50.635   00:45:39	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:14:50.635   00:45:39	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:14:50.635   00:45:39	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0
00:14:50.635   00:45:39	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:14:50.635   00:45:39	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
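
scan_nvme_ctrls finishes by wiring the bookkeeping together: local -n makes _ctrl_ns a nameref onto the per-controller namespace array, each namespace is recorded through it, and the global ctrls/nvmes/bdfs maps translate a controller name into its register array, its namespace array, and its PCI address. In miniature (values from this run):

    declare -A ctrls nvmes bdfs
    declare -A nvme0_ns
    declare -n _ctrl_ns=nvme0_ns        # nameref: writes land in nvme0_ns
    _ctrl_ns[1]=nvme0n1                 # namespace 1 of this controller
    ctrls[nvme0]=nvme0                  # controller name -> id-ctrl array name
    nvmes[nvme0]=nvme0_ns               # controller name -> namespace array name
    bdfs[nvme0]=0000:5e:00.0            # controller name -> PCI address
    echo "ns1=${nvme0_ns[1]} bdf=${bdfs[nvme0]}"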
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@41 -- # nvme list
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:50.635   00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@41 -- # kernel_out[0]='Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:14:50.635  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:14:50.635  nvme0n1          nvme0n1            BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184'
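
The @15-@25 lines above are the test's capture helper in action: the plugin binary is piped through a sed that strips volatile fields (NQNs, /dev/ and /dev/spdk/ prefixes, ng vs nvme generic names) so kernel and CUSE listings become comparable, and PIPESTATUS[0] checks that nvme itself, rather than the sed stage, exited zero. A hedged, simplified version of the same pattern:

    NVME_CMD=/usr/local/src/nvme-cli-plugin/nvme   # path taken from this log
    "$NVME_CMD" list | sed -e 's#/dev\(/spdk\)\?/##g' > /tmp/out.txt
    status=("${PIPESTATUS[@]}")                    # one entry per pipe stage
    (( status[0] == 0 )) || echo 'nvme list itself failed' >&2
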
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@42 -- # nvme list -v
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:50.635   00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@42 -- # kernel_out[1]='Subsystem        Subsystem-NQN                                                                                    Controllers
00:14:50.635  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:14:50.635  nvme0     nvme0
00:14:50.635  
00:14:50.635  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:14:50.635  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:14:50.635  nvme0    BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0 nvme0n1
00:14:50.635  
00:14:50.635  Device       Generic      NSID       Usage                      Format           Controllers     
00:14:50.635  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:14:50.635  nvme0n1 nvme0n1   0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0'
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@43 -- # nvme list -v -o json
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v -o json
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:50.635   00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@43 -- # kernel_out[2]='{
00:14:50.635    "Devices":[
00:14:50.635      {
00:14:50.635        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:14:50.635        "Subsystems":[
00:14:50.635          {
00:14:50.635            "Subsystem":"nvme0",
00:14:50.635            
00:14:50.635            "Controllers":[
00:14:50.635              {
00:14:50.635                "Controller":"nvme0",
00:14:50.635                "SerialNumber":"BTLJ83030AK84P0DGN",
00:14:50.635                "ModelNumber":"INTEL SSDPE2KX040T8",
00:14:50.635                "Firmware":"VDV10184",
00:14:50.635                "Transport":"pcie",
00:14:50.635                "Address":"0000:5e:00.0",
00:14:50.635                "Namespaces":[
00:14:50.635                  {
00:14:50.635                    "NameSpace":"nvme0n1",
00:14:50.635                    "Generic":"nvme0n1",
00:14:50.635                    "NSID":1,
00:14:50.635                    "UsedBytes":4000787030016,
00:14:50.635                    "MaximumLBA":7814037168,
00:14:50.635                    "PhysicalSize":4000787030016,
00:14:50.635                    "SectorSize":512
00:14:50.635                  }
00:14:50.635                ],
00:14:50.635                "Paths":[
00:14:50.635                ]
00:14:50.635              }
00:14:50.635            ],
00:14:50.635            "Namespaces":[
00:14:50.635            ]
00:14:50.635          }
00:14:50.635        ]
00:14:50.635      }
00:14:50.635    ]
00:14:50.635  }'
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@44 -- # nvme list-subsys
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list-subsys
00:14:50.635    00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:50.635   00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@44 -- # kernel_out[3]='nvme0 - 
00:14:50.635  \
00:14:50.635   +- nvme0 pcie 0000:5e:00.0 live'
00:14:50.636   00:45:39	-- cuse/spdk_nvme_cli_plugin.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:14:53.915  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:14:53.915  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:14:57.200  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
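
The block above is setup.sh rebinding the ioatdma engines and the NVMe controller from their kernel drivers to vfio-pci so SPDK's userspace driver can claim them. The underlying sysfs mechanics, reduced to a single device (root required; BDF taken from this log):

    bdf=0000:5e:00.0
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"   > /sys/bus/pci/drivers_probe
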
00:14:57.200   00:45:46	-- cuse/spdk_nvme_cli_plugin.sh@49 -- # spdk_tgt_pid=1012398
00:14:57.200   00:45:46	-- cuse/spdk_nvme_cli_plugin.sh@48 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:14:57.200   00:45:46	-- cuse/spdk_nvme_cli_plugin.sh@51 -- # waitforlisten 1012398
00:14:57.200   00:45:46	-- common/autotest_common.sh@829 -- # '[' -z 1012398 ']'
00:14:57.200   00:45:46	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:57.200   00:45:46	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:57.200   00:45:46	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:57.200  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:57.200   00:45:46	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:57.200   00:45:46	-- common/autotest_common.sh@10 -- # set +x
00:14:57.200  [2024-12-17 00:45:46.377633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:57.200  [2024-12-17 00:45:46.377700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012398 ]
00:14:57.200  EAL: No free 2048 kB hugepages reported on node 1
00:14:57.459  [2024-12-17 00:45:46.484189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:57.459  [2024-12-17 00:45:46.536810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:57.459  [2024-12-17 00:45:46.536968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:57.459  [2024-12-17 00:45:46.709529] 'OCF_Core' volume operations registered
00:14:57.459  [2024-12-17 00:45:46.711981] 'OCF_Cache' volume operations registered
00:14:57.459  [2024-12-17 00:45:46.714914] 'OCF Composite' volume operations registered
00:14:57.459  [2024-12-17 00:45:46.717419] 'SPDK_block_device' volume operations registered
00:14:58.394   00:45:47	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:58.394   00:45:47	-- common/autotest_common.sh@862 -- # return 0
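
The @51 waitforlisten call above blocks until the freshly launched spdk_tgt (pid 1012398) answers on /var/tmp/spdk.sock; the `(( i == 0 ))` / `return 0` pair is its retry loop completing on the first attempt. Conceptually it is just an RPC poll, roughly like this sketch (the real helper has more bookkeeping):

    RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
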
00:14:58.394   00:45:47	-- cuse/spdk_nvme_cli_plugin.sh@54 -- # for ctrl in "${ordered_ctrls[@]}"
00:14:58.394   00:45:47	-- cuse/spdk_nvme_cli_plugin.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0
00:15:01.677  nvme0n1
00:15:01.677   00:45:50	-- cuse/spdk_nvme_cli_plugin.sh@56 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n nvme0
00:15:01.677  [2024-12-17 00:45:50.639506] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:15:01.677  [2024-12-17 00:45:50.639661] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:15:01.677  [2024-12-17 00:45:50.639765] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
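
The two RPCs above are the heart of the CUSE test: attach_controller creates bdev nvme0n1 over PCIe, and cuse_register exports character devices /dev/spdk/nvme0 and /dev/spdk/nvme0n1 (the two fuse sessions in the notices) that accept ordinary NVMe ioctls, which is what lets unmodified nvme-cli and smartctl drive an SPDK-owned controller. Replayed by hand, with paths from this log, they would be:

    RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0
    "$RPC" bdev_nvme_cuse_register -n nvme0
    ls /dev/spdk/      # expect: nvme0  nvme0n1
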
00:15:01.677   00:45:50	-- cuse/spdk_nvme_cli_plugin.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:15:01.677  [
00:15:01.677    {
00:15:01.677      "name": "nvme0n1",
00:15:01.677      "aliases": [
00:15:01.677        "c608a502-f1ef-441f-bd43-9d9841104252"
00:15:01.677      ],
00:15:01.677      "product_name": "NVMe disk",
00:15:01.677      "block_size": 512,
00:15:01.677      "num_blocks": 7814037168,
00:15:01.677      "uuid": "c608a502-f1ef-441f-bd43-9d9841104252",
00:15:01.677      "assigned_rate_limits": {
00:15:01.677        "rw_ios_per_sec": 0,
00:15:01.677        "rw_mbytes_per_sec": 0,
00:15:01.677        "r_mbytes_per_sec": 0,
00:15:01.677        "w_mbytes_per_sec": 0
00:15:01.677      },
00:15:01.677      "claimed": false,
00:15:01.677      "zoned": false,
00:15:01.677      "supported_io_types": {
00:15:01.677        "read": true,
00:15:01.677        "write": true,
00:15:01.677        "unmap": true,
00:15:01.677        "write_zeroes": true,
00:15:01.677        "flush": true,
00:15:01.677        "reset": true,
00:15:01.677        "compare": false,
00:15:01.677        "compare_and_write": false,
00:15:01.677        "abort": true,
00:15:01.677        "nvme_admin": true,
00:15:01.677        "nvme_io": true
00:15:01.677      },
00:15:01.677      "driver_specific": {
00:15:01.677        "nvme": [
00:15:01.677          {
00:15:01.677            "pci_address": "0000:5e:00.0",
00:15:01.677            "trid": {
00:15:01.677              "trtype": "PCIe",
00:15:01.677              "traddr": "0000:5e:00.0"
00:15:01.677            },
00:15:01.677            "cuse_device": "spdk/nvme0n1",
00:15:01.677            "ctrlr_data": {
00:15:01.677              "cntlid": 0,
00:15:01.677              "vendor_id": "0x8086",
00:15:01.677              "model_number": "INTEL SSDPE2KX040T8",
00:15:01.677              "serial_number": "BTLJ83030AK84P0DGN",
00:15:01.677              "firmware_revision": "VDV10184",
00:15:01.677              "oacs": {
00:15:01.677                "security": 0,
00:15:01.677                "format": 1,
00:15:01.677                "firmware": 1,
00:15:01.677                "ns_manage": 1
00:15:01.677              },
00:15:01.677              "multi_ctrlr": false,
00:15:01.677              "ana_reporting": false
00:15:01.677            },
00:15:01.677            "vs": {
00:15:01.677              "nvme_version": "1.2"
00:15:01.677            },
00:15:01.677            "ns_data": {
00:15:01.677              "id": 1,
00:15:01.677              "can_share": false
00:15:01.677            }
00:15:01.677          }
00:15:01.677        ],
00:15:01.677        "mp_policy": "active_passive"
00:15:01.677      }
00:15:01.677    }
00:15:01.677  ]
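
A quick consistency check on the bdev dump above: num_blocks x block_size = 7814037168 x 512 = 4000787030016 bytes, exactly the 4.00 TB that nvme list reported earlier and smartctl reports later. With jq:

    RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_get_bdevs | jq '.[0].num_blocks * .[0].block_size'
    # 4000787030016
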
00:15:01.677   00:45:50	-- cuse/spdk_nvme_cli_plugin.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers
00:15:01.935  [
00:15:01.935    {
00:15:01.935      "name": "nvme0",
00:15:01.935      "ctrlrs": [
00:15:01.935        {
00:15:01.935          "state": "enabled",
00:15:01.935          "cuse_device": "spdk/nvme0",
00:15:01.935          "trid": {
00:15:01.935            "trtype": "PCIe",
00:15:01.935            "traddr": "0000:5e:00.0"
00:15:01.935          },
00:15:01.935          "cntlid": 0,
00:15:01.935          "host": {
00:15:01.935            "nqn": "nqn.2014-08.org.nvmexpress:uuid:63f94f22-7537-400b-9e84-4a4cbe559a8e",
00:15:01.935            "addr": "",
00:15:01.935            "svcid": ""
00:15:01.935          }
00:15:01.935        }
00:15:01.935      ]
00:15:01.935    }
00:15:01.935  ]
00:15:01.935    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@63 -- # nvme spdk list
00:15:01.935    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list
00:15:01.935    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:15:02.194   00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@63 -- # cuse_out[0]='Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:15:02.194  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:15:02.194  nvme0n1     nvme0n1     BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@64 -- # nvme spdk list -v
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:15:02.194   00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@64 -- # cuse_out[1]='Subsystem        Subsystem-NQN                                                                                    Controllers
00:15:02.194  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:15:02.194  nvme0                                                                                                             nvme0
00:15:02.194  
00:15:02.194  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:15:02.194  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:15:02.194  nvme0 BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0        nvme0n1
00:15:02.194  
00:15:02.194  Device       Generic      NSID       Usage                      Format           Controllers     
00:15:02.194  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:15:02.194  nvme0n1 nvme0n1 0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@65 -- # nvme spdk list -v -o json
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v -o json
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:15:02.194   00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@65 -- # cuse_out[2]='{
00:15:02.194    "Devices":[
00:15:02.194      {
00:15:02.194        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:15:02.194        "Subsystems":[
00:15:02.194          {
00:15:02.194            "Subsystem":"nvme0",
00:15:02.194            
00:15:02.194            "Controllers":[
00:15:02.194              {
00:15:02.194                "Controller":"nvme0",
00:15:02.194                "SerialNumber":"BTLJ83030AK84P0DGN",
00:15:02.194                "ModelNumber":"INTEL SSDPE2KX040T8",
00:15:02.194                "Firmware":"VDV10184",
00:15:02.194                "Transport":"pcie",
00:15:02.194                "Address":"0000:5e:00.0",
00:15:02.194                "Namespaces":[
00:15:02.194                  {
00:15:02.194                    "NameSpace":"nvme0n1",
00:15:02.194                    "Generic":"nvme0n1",
00:15:02.194                    "NSID":1,
00:15:02.194                    "UsedBytes":4000787030016,
00:15:02.194                    "MaximumLBA":7814037168,
00:15:02.194                    "PhysicalSize":4000787030016,
00:15:02.194                    "SectorSize":512
00:15:02.194                  }
00:15:02.194                ],
00:15:02.194                "Paths":[
00:15:02.194                ]
00:15:02.194              }
00:15:02.194            ],
00:15:02.194            "Namespaces":[
00:15:02.194            ]
00:15:02.194          }
00:15:02.194        ]
00:15:02.194      }
00:15:02.194    ]
00:15:02.194  }'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@66 -- # nvme spdk list-subsys
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:15:02.194   00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@66 -- # cuse_out[3]='nvme0 - 
00:15:02.194  \
00:15:02.194   +- nvme0 pcie 0000:5e:00.0 live'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@69 -- # nvme spdk list-subsys -v -o json
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys -v -o json
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:15:02.194     00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # trap - ERR
00:15:02.194     00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # print_backtrace
00:15:02.194     00:45:51	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:15:02.194     00:45:51	-- common/autotest_common.sh@1142 -- # return 0
00:15:02.194   00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@69 -- # [[ Json output format is not supported. == \J\s\o\n\ \o\u\t\p\u\t\ \f\o\r\m\a\t\ \i\s\ \n\o\t\ \s\u\p\p\o\r\t\e\d\. ]]
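
The @69 step above is a deliberate negative test: `spdk list-subsys -v -o json` is expected to fail, the ERR trap is cleared, and the test only proceeds if the captured message is exactly "Json output format is not supported." A hedged reconstruction of that assertion:

    NVME_CMD=/usr/local/src/nvme-cli-plugin/nvme   # path taken from this log
    err=$("$NVME_CMD" spdk list-subsys -v -o json 2>&1) || true
    [[ $err == "Json output format is not supported." ]] \
        && echo "expected failure observed"
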
00:15:02.194   00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@71 -- # diff -ub /dev/fd/62 /dev/fd/61
00:15:02.194    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:15:02.194  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:15:02.194  nvme0n1          nvme0n1            BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184' 'Subsystem        Subsystem-NQN                                                                                    Controllers
00:15:02.194  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:15:02.194  nvme0     nvme0
00:15:02.194  
00:15:02.194  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:15:02.194  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:15:02.194  nvme0    BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0 nvme0n1
00:15:02.194  
00:15:02.194  Device       Generic      NSID       Usage                      Format           Controllers     
00:15:02.194  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:15:02.194  nvme0n1 nvme0n1   0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0' '{
00:15:02.194    "Devices":[
00:15:02.194      {
00:15:02.194        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:15:02.194        "Subsystems":[
00:15:02.194          {
00:15:02.194            "Subsystem":"nvme0",
00:15:02.194            
00:15:02.194            "Controllers":[
00:15:02.194              {
00:15:02.195                "Controller":"nvme0",
00:15:02.195                "SerialNumber":"BTLJ83030AK84P0DGN",
00:15:02.195                "ModelNumber":"INTEL SSDPE2KX040T8",
00:15:02.195                "Firmware":"VDV10184",
00:15:02.195                "Transport":"pcie",
00:15:02.195                "Address":"0000:5e:00.0",
00:15:02.195                "Namespaces":[
00:15:02.195                  {
00:15:02.195                    "NameSpace":"nvme0n1",
00:15:02.195                    "Generic":"nvme0n1",
00:15:02.195                    "NSID":1,
00:15:02.195                    "UsedBytes":4000787030016,
00:15:02.195                    "MaximumLBA":7814037168,
00:15:02.195                    "PhysicalSize":4000787030016,
00:15:02.195                    "SectorSize":512
00:15:02.195                  }
00:15:02.195                ],
00:15:02.195                "Paths":[
00:15:02.195                ]
00:15:02.195              }
00:15:02.195            ],
00:15:02.195            "Namespaces":[
00:15:02.195            ]
00:15:02.195          }
00:15:02.195        ]
00:15:02.195      }
00:15:02.195    ]
00:15:02.195  }' 'nvme0 - 
00:15:02.195  \
00:15:02.195   +- nvme0 pcie 0000:5e:00.0 live'
00:15:02.195    00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:15:02.195  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:15:02.195  nvme0n1     nvme0n1     BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184' 'Subsystem        Subsystem-NQN                                                                                    Controllers
00:15:02.195  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:15:02.195  nvme0                                                                                                             nvme0
00:15:02.195  
00:15:02.195  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:15:02.195  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:15:02.195  nvme0 BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0        nvme0n1
00:15:02.195  
00:15:02.195  Device       Generic      NSID       Usage                      Format           Controllers     
00:15:02.195  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:15:02.195  nvme0n1 nvme0n1 0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0' '{
00:15:02.195    "Devices":[
00:15:02.195      {
00:15:02.195        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:15:02.195        "Subsystems":[
00:15:02.195          {
00:15:02.195            "Subsystem":"nvme0",
00:15:02.195            
00:15:02.195            "Controllers":[
00:15:02.195              {
00:15:02.195                "Controller":"nvme0",
00:15:02.195                "SerialNumber":"BTLJ83030AK84P0DGN",
00:15:02.195                "ModelNumber":"INTEL SSDPE2KX040T8",
00:15:02.195                "Firmware":"VDV10184",
00:15:02.195                "Transport":"pcie",
00:15:02.195                "Address":"0000:5e:00.0",
00:15:02.195                "Namespaces":[
00:15:02.195                  {
00:15:02.195                    "NameSpace":"nvme0n1",
00:15:02.195                    "Generic":"nvme0n1",
00:15:02.195                    "NSID":1,
00:15:02.195                    "UsedBytes":4000787030016,
00:15:02.195                    "MaximumLBA":7814037168,
00:15:02.195                    "PhysicalSize":4000787030016,
00:15:02.195                    "SectorSize":512
00:15:02.195                  }
00:15:02.195                ],
00:15:02.195                "Paths":[
00:15:02.195                ]
00:15:02.195              }
00:15:02.195            ],
00:15:02.195            "Namespaces":[
00:15:02.195            ]
00:15:02.195          }
00:15:02.195        ]
00:15:02.195      }
00:15:02.195    ]
00:15:02.195  }' 'nvme0 - 
00:15:02.195  \
00:15:02.195   +- nvme0 pcie 0000:5e:00.0 live'
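
The two long printf blocks above are the payload of `diff -ub /dev/fd/62 /dev/fd/61` at @71: both capture arrays are fed to diff through process substitution, so no temp files are needed, and -b makes diff ignore the whitespace-width differences visible between the kernel columns (`nvme0n1          nvme0n1`) and the CUSE ones (`nvme0n1     nvme0n1`). In script form:

    diff -ub <(printf '%s\n' "${kernel_out[@]}") \
             <(printf '%s\n' "${cuse_out[@]}")
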
00:15:02.195   00:45:51	-- cuse/spdk_nvme_cli_plugin.sh@1 -- # killprocess 1012398
00:15:02.195   00:45:51	-- common/autotest_common.sh@936 -- # '[' -z 1012398 ']'
00:15:02.195   00:45:51	-- common/autotest_common.sh@940 -- # kill -0 1012398
00:15:02.195    00:45:51	-- common/autotest_common.sh@941 -- # uname
00:15:02.195   00:45:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:02.195    00:45:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1012398
00:15:02.195   00:45:51	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:02.195   00:45:51	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:02.195   00:45:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1012398'
00:15:02.195  killing process with pid 1012398
00:15:02.195   00:45:51	-- common/autotest_common.sh@955 -- # kill 1012398
00:15:02.195   00:45:51	-- common/autotest_common.sh@960 -- # wait 1012398
00:15:07.459   00:45:56	-- cuse/spdk_nvme_cli_plugin.sh@1 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:15:09.988  Waiting for block devices as requested
00:15:10.245  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:15:10.245  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:15:10.503  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:15:10.503  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:15:10.503  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:15:10.762  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:15:10.762  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:15:10.762  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:15:11.020  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:15:11.020  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:15:11.020  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:15:11.279  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:15:11.279  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:15:11.279  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:15:11.538  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:15:11.538  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:15:11.538  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:15:11.538  
00:15:11.538  real	0m25.974s
00:15:11.538  user	0m13.327s
00:15:11.538  sys	0m8.231s
00:15:11.538   00:46:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:11.538   00:46:00	-- common/autotest_common.sh@10 -- # set +x
00:15:11.538  ************************************
00:15:11.538  END TEST nvme_cli_plugin
00:15:11.538  ************************************
00:15:11.797   00:46:00	-- cuse/nvme_cuse.sh@21 -- # run_test nvme_smartctl_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh
00:15:11.797   00:46:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:15:11.797   00:46:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:11.797   00:46:00	-- common/autotest_common.sh@10 -- # set +x
00:15:11.797  ************************************
00:15:11.797  START TEST nvme_smartctl_cuse
00:15:11.797  ************************************
00:15:11.797   00:46:00	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh
00:15:11.797  * Looking for test storage...
00:15:11.797  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:15:11.797    00:46:00	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:11.797     00:46:00	-- common/autotest_common.sh@1690 -- # lcov --version
00:15:11.797     00:46:00	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:11.797    00:46:01	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:11.797    00:46:01	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:11.797    00:46:01	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:11.797    00:46:01	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:11.797    00:46:01	-- scripts/common.sh@335 -- # IFS=.-:
00:15:11.797    00:46:01	-- scripts/common.sh@335 -- # read -ra ver1
00:15:11.797    00:46:01	-- scripts/common.sh@336 -- # IFS=.-:
00:15:11.797    00:46:01	-- scripts/common.sh@336 -- # read -ra ver2
00:15:11.797    00:46:01	-- scripts/common.sh@337 -- # local 'op=<'
00:15:11.797    00:46:01	-- scripts/common.sh@339 -- # ver1_l=2
00:15:11.797    00:46:01	-- scripts/common.sh@340 -- # ver2_l=1
00:15:11.797    00:46:01	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:11.797    00:46:01	-- scripts/common.sh@343 -- # case "$op" in
00:15:11.797    00:46:01	-- scripts/common.sh@344 -- # : 1
00:15:11.797    00:46:01	-- scripts/common.sh@363 -- # (( v = 0 ))
00:15:11.797    00:46:01	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:11.797     00:46:01	-- scripts/common.sh@364 -- # decimal 1
00:15:11.797     00:46:01	-- scripts/common.sh@352 -- # local d=1
00:15:11.797     00:46:01	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:11.797     00:46:01	-- scripts/common.sh@354 -- # echo 1
00:15:11.797    00:46:01	-- scripts/common.sh@364 -- # ver1[v]=1
00:15:11.797     00:46:01	-- scripts/common.sh@365 -- # decimal 2
00:15:11.797     00:46:01	-- scripts/common.sh@352 -- # local d=2
00:15:11.797     00:46:01	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:11.797     00:46:01	-- scripts/common.sh@354 -- # echo 2
00:15:11.797    00:46:01	-- scripts/common.sh@365 -- # ver2[v]=2
00:15:11.797    00:46:01	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:11.797    00:46:01	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:11.797    00:46:01	-- scripts/common.sh@367 -- # return 0
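
The scripts/common.sh walk above is a numeric version compare: both strings are split on '.', '-' and ':' (the IFS=.-: lines) and compared field by field, so `lt 1.15 2` returns true because the first fields already differ (1 < 2); here the result drives which lcov coverage flags get exported below. A compact re-derivation of the same logic, as a hedged sketch:

    ver_lt() {                       # true if $1 < $2, numeric per field
        local IFS=.-: i a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                     # equal
    }
    ver_lt 1.15 2 && echo "1.15 < 2"
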
00:15:11.797    00:46:01	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:11.797    00:46:01	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:11.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.797  		--rc genhtml_branch_coverage=1
00:15:11.797  		--rc genhtml_function_coverage=1
00:15:11.797  		--rc genhtml_legend=1
00:15:11.797  		--rc geninfo_all_blocks=1
00:15:11.797  		--rc geninfo_unexecuted_blocks=1
00:15:11.797  		
00:15:11.797  		'
00:15:11.797    00:46:01	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:11.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.797  		--rc genhtml_branch_coverage=1
00:15:11.797  		--rc genhtml_function_coverage=1
00:15:11.797  		--rc genhtml_legend=1
00:15:11.797  		--rc geninfo_all_blocks=1
00:15:11.797  		--rc geninfo_unexecuted_blocks=1
00:15:11.797  		
00:15:11.797  		'
00:15:11.797    00:46:01	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:15:11.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.797  		--rc genhtml_branch_coverage=1
00:15:11.797  		--rc genhtml_function_coverage=1
00:15:11.797  		--rc genhtml_legend=1
00:15:11.797  		--rc geninfo_all_blocks=1
00:15:11.797  		--rc geninfo_unexecuted_blocks=1
00:15:11.797  		
00:15:11.797  		'
00:15:11.797    00:46:01	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:15:11.797  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:11.797  		--rc genhtml_branch_coverage=1
00:15:11.797  		--rc genhtml_function_coverage=1
00:15:11.797  		--rc genhtml_legend=1
00:15:11.797  		--rc geninfo_all_blocks=1
00:15:11.797  		--rc geninfo_unexecuted_blocks=1
00:15:11.797  		
00:15:11.797  		'
00:15:11.797   00:46:01	-- cuse/spdk_smartctl_cuse.sh@11 -- # SMARTCTL_CMD='smartctl -d nvme'
00:15:11.797   00:46:01	-- cuse/spdk_smartctl_cuse.sh@12 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:15:11.797   00:46:01	-- cuse/spdk_smartctl_cuse.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:15:15.079  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:15:15.079  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:15:18.369  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:15:18.369    00:46:07	-- cuse/spdk_smartctl_cuse.sh@16 -- # get_first_nvme_bdf
00:15:18.369    00:46:07	-- common/autotest_common.sh@1519 -- # bdfs=()
00:15:18.369    00:46:07	-- common/autotest_common.sh@1519 -- # local bdfs
00:15:18.369    00:46:07	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:15:18.369     00:46:07	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:15:18.369     00:46:07	-- common/autotest_common.sh@1508 -- # bdfs=()
00:15:18.369     00:46:07	-- common/autotest_common.sh@1508 -- # local bdfs
00:15:18.370     00:46:07	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:18.370      00:46:07	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:15:18.370      00:46:07	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:15:18.370     00:46:07	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:15:18.370     00:46:07	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:15:18.370    00:46:07	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:15:18.370   00:46:07	-- cuse/spdk_smartctl_cuse.sh@16 -- # bdf=0000:5e:00.0
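
The @16 step above resolves the first NVMe BDF by asking gen_nvme.sh for a bdev config and pulling every params.traddr out with jq; with a single controller present, that collapses to:

    bdf=$(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh \
          | jq -r '.config[0].params.traddr')
    echo "$bdf"    # 0000:5e:00.0 on this machine
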
00:15:18.370   00:46:07	-- cuse/spdk_smartctl_cuse.sh@18 -- # PCI_ALLOWED=0000:5e:00.0
00:15:18.370   00:46:07	-- cuse/spdk_smartctl_cuse.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:15:20.902  0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0
00:15:20.902  0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1
00:15:20.902  0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2
00:15:20.902  0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3
00:15:20.902  0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4
00:15:20.902  0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5
00:15:20.902  0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6
00:15:20.902  0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7
00:15:20.902  0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0
00:15:20.902  0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1
00:15:20.902  0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2
00:15:20.902  0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3
00:15:20.902  0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4
00:15:20.902  0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5
00:15:20.902  0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6
00:15:20.902  0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7
00:15:20.902  Waiting for block devices as requested
00:15:20.902  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:15:20.902    00:46:10	-- cuse/spdk_smartctl_cuse.sh@19 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:15:21.162     00:46:10	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0
00:15:21.162     00:46:10	-- common/autotest_common.sh@1497 -- # grep 0000:5e:00.0/nvme/nvme
00:15:21.162    00:46:10	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:15:21.162    00:46:10	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:15:21.162     00:46:10	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:15:21.162    00:46:10	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:15:21.162   00:46:10	-- cuse/spdk_smartctl_cuse.sh@19 -- # nvme_name=nvme0
00:15:21.162   00:46:10	-- cuse/spdk_smartctl_cuse.sh@20 -- # [[ -z nvme0 ]]
00:15:21.162    00:46:10	-- cuse/spdk_smartctl_cuse.sh@25 -- # smartctl -d nvme --json=g -a /dev/nvme0
00:15:21.162    00:46:10	-- cuse/spdk_smartctl_cuse.sh@25 -- # grep -v /dev/nvme0
00:15:21.162    00:46:10	-- cuse/spdk_smartctl_cuse.sh@25 -- # sort
00:15:21.162   00:46:10	-- cuse/spdk_smartctl_cuse.sh@25 -- # KERNEL_SMART_JSON='json = {};
00:15:21.162  json.device = {};
00:15:21.162  json.device.protocol = "NVMe";
00:15:21.162  json.device.type = "nvme";
00:15:21.162  json.firmware_version = "VDV10184";
00:15:21.162  json.json_format_version = [];
00:15:21.162  json.json_format_version[0] = 1;
00:15:21.162  json.json_format_version[1] = 0;
00:15:21.162  json.local_time = {};
00:15:21.162  json.local_time.asctime = "Tue Dec 17 00:46:10 2024 CET";
00:15:21.162  json.local_time.time_t = 1734392770;
00:15:21.162  json.model_name = "INTEL SSDPE2KX040T8";
00:15:21.162  json.nvme_controller_id = 0;
00:15:21.162  json.nvme_error_information_log = {};
00:15:21.162  json.nvme_error_information_log.read = 16;
00:15:21.162  json.nvme_error_information_log.size = 64;
00:15:21.162  json.nvme_error_information_log.table = [];
00:15:21.162  json.nvme_error_information_log.table[0] = {};
00:15:21.162  json.nvme_error_information_log.table[0].error_count = 38801;
00:15:21.162  json.nvme_error_information_log.table[0].lba = {};
00:15:21.162  json.nvme_error_information_log.table[0].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[0].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[0].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:15:21.162  json.nvme_error_information_log.table[1] = {};
00:15:21.162  json.nvme_error_information_log.table[10] = {};
00:15:21.162  json.nvme_error_information_log.table[10].error_count = 38791;
00:15:21.162  json.nvme_error_information_log.table[10].lba = {};
00:15:21.162  json.nvme_error_information_log.table[10].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[10].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[10].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:15:21.162  json.nvme_error_information_log.table[11] = {};
00:15:21.162  json.nvme_error_information_log.table[11].error_count = 38790;
00:15:21.162  json.nvme_error_information_log.table[11].lba = {};
00:15:21.162  json.nvme_error_information_log.table[11].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[11].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[11].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:15:21.162  json.nvme_error_information_log.table[12] = {};
00:15:21.162  json.nvme_error_information_log.table[12].error_count = 38789;
00:15:21.162  json.nvme_error_information_log.table[12].lba = {};
00:15:21.162  json.nvme_error_information_log.table[12].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[12].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[12].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:15:21.162  json.nvme_error_information_log.table[13] = {};
00:15:21.162  json.nvme_error_information_log.table[13].error_count = 38788;
00:15:21.162  json.nvme_error_information_log.table[13].lba = {};
00:15:21.162  json.nvme_error_information_log.table[13].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[13].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[13].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:15:21.162  json.nvme_error_information_log.table[14] = {};
00:15:21.162  json.nvme_error_information_log.table[14].error_count = 38787;
00:15:21.162  json.nvme_error_information_log.table[14].lba = {};
00:15:21.162  json.nvme_error_information_log.table[14].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[14].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[14].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:15:21.162  json.nvme_error_information_log.table[15] = {};
00:15:21.162  json.nvme_error_information_log.table[15].error_count = 38786;
00:15:21.162  json.nvme_error_information_log.table[15].lba = {};
00:15:21.162  json.nvme_error_information_log.table[15].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[15].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[15].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:15:21.162  json.nvme_error_information_log.table[1].error_count = 38800;
00:15:21.162  json.nvme_error_information_log.table[1].lba = {};
00:15:21.162  json.nvme_error_information_log.table[1].lba.value = 0;
00:15:21.162  json.nvme_error_information_log.table[1].phase_tag = false;
00:15:21.162  json.nvme_error_information_log.table[1].status_field = {};
00:15:21.162  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:15:21.162  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:15:21.162  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:15:21.162  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:15:21.162  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:15:21.162  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:15:21.163  json.nvme_error_information_log.table[2] = {};
00:15:21.163  json.nvme_error_information_log.table[2].error_count = 38799;
00:15:21.163  json.nvme_error_information_log.table[2].lba = {};
00:15:21.163  json.nvme_error_information_log.table[2].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[2].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[2].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:15:21.163  json.nvme_error_information_log.table[3] = {};
00:15:21.163  json.nvme_error_information_log.table[3].error_count = 38798;
00:15:21.163  json.nvme_error_information_log.table[3].lba = {};
00:15:21.163  json.nvme_error_information_log.table[3].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[3].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[3].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:15:21.163  json.nvme_error_information_log.table[4] = {};
00:15:21.163  json.nvme_error_information_log.table[4].error_count = 38797;
00:15:21.163  json.nvme_error_information_log.table[4].lba = {};
00:15:21.163  json.nvme_error_information_log.table[4].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[4].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[4].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:15:21.163  json.nvme_error_information_log.table[5] = {};
00:15:21.163  json.nvme_error_information_log.table[5].error_count = 38796;
00:15:21.163  json.nvme_error_information_log.table[5].lba = {};
00:15:21.163  json.nvme_error_information_log.table[5].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[5].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[5].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:15:21.163  json.nvme_error_information_log.table[6] = {};
00:15:21.163  json.nvme_error_information_log.table[6].error_count = 38795;
00:15:21.163  json.nvme_error_information_log.table[6].lba = {};
00:15:21.163  json.nvme_error_information_log.table[6].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[6].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[6].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:15:21.163  json.nvme_error_information_log.table[7] = {};
00:15:21.163  json.nvme_error_information_log.table[7].error_count = 38794;
00:15:21.163  json.nvme_error_information_log.table[7].lba = {};
00:15:21.163  json.nvme_error_information_log.table[7].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[7].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[7].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:15:21.163  json.nvme_error_information_log.table[8] = {};
00:15:21.163  json.nvme_error_information_log.table[8].error_count = 38793;
00:15:21.163  json.nvme_error_information_log.table[8].lba = {};
00:15:21.163  json.nvme_error_information_log.table[8].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[8].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[8].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:15:21.163  json.nvme_error_information_log.table[9] = {};
00:15:21.163  json.nvme_error_information_log.table[9].error_count = 38792;
00:15:21.163  json.nvme_error_information_log.table[9].lba = {};
00:15:21.163  json.nvme_error_information_log.table[9].lba.value = 0;
00:15:21.163  json.nvme_error_information_log.table[9].phase_tag = false;
00:15:21.163  json.nvme_error_information_log.table[9].status_field = {};
00:15:21.163  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:15:21.163  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:15:21.163  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:15:21.163  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:15:21.163  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:15:21.163  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:15:21.163  json.nvme_error_information_log.unread = 48;
00:15:21.163  json.nvme_ieee_oui_identifier = 6083300;
00:15:21.163  json.nvme_number_of_namespaces = 128;
00:15:21.163  json.nvme_pci_vendor = {};
00:15:21.163  json.nvme_pci_vendor.id = 32902;
00:15:21.163  json.nvme_pci_vendor.subsystem_id = 32902;
00:15:21.163  json.nvme_smart_health_information_log = {};
00:15:21.163  json.nvme_smart_health_information_log.available_spare = 99;
00:15:21.163  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:15:21.163  json.nvme_smart_health_information_log.controller_busy_time = 3927;
00:15:21.163  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:15:21.163  json.nvme_smart_health_information_log.critical_warning = 0;
00:15:21.163  json.nvme_smart_health_information_log.data_units_read = 631286614;
00:15:21.163  json.nvme_smart_health_information_log.data_units_written = 792639254;
00:15:21.163  json.nvme_smart_health_information_log.host_reads = 37097247491;
00:15:21.163  json.nvme_smart_health_information_log.host_writes = 43076543781;
00:15:21.163  json.nvme_smart_health_information_log.media_errors = 0;
00:15:21.163  json.nvme_smart_health_information_log.num_err_log_entries = 38801;
00:15:21.163  json.nvme_smart_health_information_log.percentage_used = 32;
00:15:21.163  json.nvme_smart_health_information_log.power_cycles = 31;
00:15:21.163  json.nvme_smart_health_information_log.power_on_hours = 20880;
00:15:21.163  json.nvme_smart_health_information_log.temperature = 37;
00:15:21.163  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:15:21.163  json.nvme_smart_health_information_log.warning_temp_time = 2211;
00:15:21.163  json.nvme_total_capacity = 4000787030016;
00:15:21.163  json.nvme_unallocated_capacity = 0;
00:15:21.163  json.nvme_version = {};
00:15:21.163  json.nvme_version.string = "1.2";
00:15:21.163  json.nvme_version.value = 66048;
00:15:21.163  json.power_cycle_count = 31;
00:15:21.163  json.power_on_time = {};
00:15:21.163  json.power_on_time.hours = 20880;
00:15:21.163  json.serial_number = "BTLJ83030AK84P0DGN";
00:15:21.163  json.smartctl = {};
00:15:21.163  json.smartctl.argv = [];
00:15:21.163  json.smartctl.argv[0] = "smartctl";
00:15:21.163  json.smartctl.argv[1] = "-d";
00:15:21.163  json.smartctl.argv[2] = "nvme";
00:15:21.163  json.smartctl.argv[3] = "--json=g";
00:15:21.163  json.smartctl.argv[4] = "-a";
00:15:21.163  json.smartctl.build_info = "(local build)";
00:15:21.163  json.smartctl.exit_status = 0;
00:15:21.163  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:15:21.163  json.smartctl.pre_release = false;
00:15:21.163  json.smartctl.svn_revision = "5530";
00:15:21.163  json.smartctl.version = [];
00:15:21.163  json.smartctl.version[0] = 7;
00:15:21.163  json.smartctl.version[1] = 4;
00:15:21.163  json.smart_status = {};
00:15:21.163  json.smart_status.nvme = {};
00:15:21.163  json.smart_status.nvme.value = 0;
00:15:21.163  json.smart_status.passed = true;
00:15:21.163  json.smart_support = {};
00:15:21.163  json.smart_support.available = true;
00:15:21.163  json.smart_support.enabled = true;
00:15:21.163  json.temperature = {};
00:15:21.163  json.temperature.current = 37;'
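The block above is smartctl's --json=g output, which renders the whole JSON document as flat `key = value;` assignments; the test captures it into a shell variable so the kernel-device and CUSE-device views can be compared later. A minimal Python sketch of parsing that flat form back into a dictionary, under the assumption of a hypothetical helper name (parse_flat_smartctl is not part of the test scripts):

    # Hypothetical helper: turn smartctl --json=g flat output
    # (lines like `json.a.b[0] = "x";`) into a plain dict.
    import re

    def parse_flat_smartctl(text):
        entries = {}
        for line in text.splitlines():
            m = re.match(r'(json\S*)\s*=\s*(.*);$', line.strip())
            if m is None:
                continue
            key, value = m.groups()
            if value in ('{}', '[]'):
                continue  # container declarations carry no scalar payload
            entries[key] = value.strip('"')
        return entries

    # e.g. parse_flat_smartctl(text)["json.serial_number"] == "BTLJ83030AK84P0DGN"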
00:15:21.163   00:46:10	-- cuse/spdk_smartctl_cuse.sh@27 -- # smartctl -d nvme -i /dev/nvme0n1
00:15:21.163  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:21.163  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:21.163  
00:15:21.163  === START OF INFORMATION SECTION ===
00:15:21.163  Model Number:                       INTEL SSDPE2KX040T8
00:15:21.163  Serial Number:                      BTLJ83030AK84P0DGN
00:15:21.163  Firmware Version:                   VDV10184
00:15:21.163  PCI Vendor/Subsystem ID:            0x8086
00:15:21.163  IEEE OUI Identifier:                0x5cd2e4
00:15:21.163  Total NVM Capacity:                 4,000,787,030,016 [4.00 TB]
00:15:21.163  Unallocated NVM Capacity:           0
00:15:21.163  Controller ID:                      0
00:15:21.163  NVMe Version:                       1.2
00:15:21.163  Number of Namespaces:               128
00:15:21.163  Namespace 1 Size/Capacity:          4,000,787,030,016 [4.00 TB]
00:15:21.163  Namespace 1 Formatted LBA Size:     512
00:15:21.163  Namespace 1 IEEE EUI-64:            000000 000000f76e
00:15:21.163  Local Time is:                      Tue Dec 17 00:46:10 2024 CET
00:15:21.163  
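Two values in this output encode the same thing: "NVMe Version: 1.2" here and json.nvme_version.value = 66048 in the JSON dump above. The NVMe VS register packs major/minor/tertiary fields into one 32-bit word (66048 == 0x00010200). A short sketch of that decoding, following the NVMe-specified field layout:

    # NVMe VS register: bits 31:16 major, 15:8 minor, 7:0 tertiary.
    def nvme_version_string(vs):
        major, minor, tertiary = vs >> 16, (vs >> 8) & 0xFF, vs & 0xFF
        return f"{major}.{minor}" + (f".{tertiary}" if tertiary else "")

    assert nvme_version_string(66048) == "1.2"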
00:15:21.163    00:46:10	-- cuse/spdk_smartctl_cuse.sh@30 -- # smartctl -d nvme -l error /dev/nvme0
00:15:21.164   00:46:10	-- cuse/spdk_smartctl_cuse.sh@30 -- # KERNEL_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:21.164  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:21.164  
00:15:21.164  === START OF SMART DATA SECTION ===
00:15:21.164  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:15:21.164  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:15:21.164    0      38801     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    1      38800     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    2      38799     0       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    3      38798     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    4      38797     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    5      38796     0       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    6      38795     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    7      38794     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    8      38793     0       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164    9      38792     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164   10      38791     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164   11      38790     0       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164   12      38789     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164   13      38788     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164   14      38787     0       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164   15      38786     2       -  0xc00c      -            0     -     -  Internal Error
00:15:21.164  ... (48 entries not read)'
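Each row above reports Status 0xc00c, and the JSON dumps show the same entries as status_field.value = 24582; the table form keeps the phase-tag bit in bit 0, so 0xc00c >> 1 == 0x6006 == 24582. A sketch decoding the 15-bit status field per the NVMe completion-status layout, reproducing the named fields smartctl already prints:

    # NVMe status field (as in status_field.value): SC bits 0-7,
    # SCT bits 8-10, More bit 13, DNR bit 14.
    def decode_status(value):
        return {
            "status_code": value & 0xFF,             # 6 -> Internal Error
            "status_code_type": (value >> 8) & 0x7,  # 0 -> generic command status
            "more": bool((value >> 13) & 1),
            "do_not_retry": bool((value >> 14) & 1),
        }

    # 24582 == 0x6006 -> SC=6, SCT=0, More=True, DNR=True
    print(decode_status(24582))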
00:15:21.164   00:46:10	-- cuse/spdk_smartctl_cuse.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:15:24.450  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:15:24.450  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:15:27.734  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
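setup.sh has just rebound the NVMe controller at 0000:5e:00.0 from the kernel nvme driver to vfio-pci (the other 8086 2021 functions were already on vfio-pci). Which driver owns a PCI function can be read straight from sysfs; a small illustrative sketch, with the BDF taken from the log:

    # Read the driver a PCI function is bound to via its sysfs symlink.
    import os

    def bound_driver(bdf):
        link = f"/sys/bus/pci/devices/{bdf}/driver"
        return os.path.basename(os.readlink(link)) if os.path.islink(link) else None

    print(bound_driver("0000:5e:00.0"))  # "vfio-pci" after setup.sh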
00:15:27.992   00:46:16	-- cuse/spdk_smartctl_cuse.sh@35 -- # spdk_tgt_pid=1018813
00:15:27.992   00:46:16	-- cuse/spdk_smartctl_cuse.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:15:27.992   00:46:16	-- cuse/spdk_smartctl_cuse.sh@36 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:15:27.992   00:46:16	-- cuse/spdk_smartctl_cuse.sh@38 -- # waitforlisten 1018813
00:15:27.992   00:46:16	-- common/autotest_common.sh@829 -- # '[' -z 1018813 ']'
00:15:27.992   00:46:16	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:27.992   00:46:16	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:27.992   00:46:16	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:27.992  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:27.992   00:46:16	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:27.992   00:46:16	-- common/autotest_common.sh@10 -- # set +x
00:15:27.992  [2024-12-17 00:46:17.050731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:27.992  [2024-12-17 00:46:17.050797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018813 ]
00:15:27.992  EAL: No free 2048 kB hugepages reported on node 1
00:15:27.992  [2024-12-17 00:46:17.158394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:27.992  [2024-12-17 00:46:17.207519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:27.992  [2024-12-17 00:46:17.207724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:27.992  [2024-12-17 00:46:17.207728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:28.250  [2024-12-17 00:46:17.370006] 'OCF_Core' volume operations registered
00:15:28.250  [2024-12-17 00:46:17.372408] 'OCF_Cache' volume operations registered
00:15:28.250  [2024-12-17 00:46:17.375252] 'OCF Composite' volume operations registered
00:15:28.250  [2024-12-17 00:46:17.377732] 'SPDK_block_device' volume operations registered
00:15:28.844   00:46:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:28.844   00:46:17	-- common/autotest_common.sh@862 -- # return 0
00:15:28.844   00:46:17	-- cuse/spdk_smartctl_cuse.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:15:32.192  Nvme0n1
00:15:32.192   00:46:20	-- cuse/spdk_smartctl_cuse.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:15:32.192  [2024-12-17 00:46:21.115749] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:15:32.192  [2024-12-17 00:46:21.115939] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:15:32.192  [2024-12-17 00:46:21.116058] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:15:32.192   00:46:21	-- cuse/spdk_smartctl_cuse.sh@43 -- # sleep 5
00:15:37.466   00:46:26	-- cuse/spdk_smartctl_cuse.sh@45 -- # '[' '!' -c /dev/spdk/nvme0 ']'
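The script sleeps five seconds (spdk_smartctl_cuse.sh@43) and then asserts that /dev/spdk/nvme0 exists as a character device. A hypothetical polling variant of the same readiness check, shown only as a sketch, not as what the test does:

    # Hypothetical poll for the CUSE char device instead of a fixed sleep.
    import os, stat, time

    def wait_for_cuse(path="/dev/spdk/nvme0", timeout=30.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                if stat.S_ISCHR(os.stat(path).st_mode):
                    return True
            except FileNotFoundError:
                pass
            time.sleep(0.1)
        return False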
00:15:37.466    00:46:26	-- cuse/spdk_smartctl_cuse.sh@49 -- # smartctl -d nvme --json=g -a /dev/spdk/nvme0
00:15:37.466    00:46:26	-- cuse/spdk_smartctl_cuse.sh@49 -- # grep -v /dev/spdk/nvme0
00:15:37.466    00:46:26	-- cuse/spdk_smartctl_cuse.sh@49 -- # sort
00:15:37.466  [2024-12-17 00:46:26.162657] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:15:37.466   00:46:26	-- cuse/spdk_smartctl_cuse.sh@49 -- # CUSE_SMART_JSON='json = {};
00:15:37.466  json.device = {};
00:15:37.466  json.device.protocol = "NVMe";
00:15:37.466  json.device.type = "nvme";
00:15:37.466  json.firmware_version = "VDV10184";
00:15:37.466  json.json_format_version = [];
00:15:37.466  json.json_format_version[0] = 1;
00:15:37.466  json.json_format_version[1] = 0;
00:15:37.466  json.local_time = {};
00:15:37.466  json.local_time.asctime = "Tue Dec 17 00:46:26 2024 CET";
00:15:37.466  json.local_time.time_t = 1734392786;
00:15:37.466  json.model_name = "INTEL SSDPE2KX040T8";
00:15:37.466  json.nvme_controller_id = 0;
00:15:37.466  json.nvme_error_information_log = {};
00:15:37.466  json.nvme_error_information_log.read = 16;
00:15:37.466  json.nvme_error_information_log.size = 64;
00:15:37.466  json.nvme_error_information_log.table = [];
00:15:37.466  json.nvme_error_information_log.table[0] = {};
00:15:37.466  json.nvme_error_information_log.table[0].error_count = 38801;
00:15:37.466  json.nvme_error_information_log.table[0].lba = {};
00:15:37.467  json.nvme_error_information_log.table[0].lba.value = 0;
00:15:37.467  json.nvme_error_information_log.table[0].phase_tag = false;
00:15:37.467  json.nvme_error_information_log.table[0].status_field = {};
00:15:37.467  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:15:37.467  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:15:37.467  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:15:37.467  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:15:37.467  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:15:37.467  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:15:37.467  json.nvme_error_information_log.table[1] = {};
00:15:37.467  json.nvme_error_information_log.table[10] = {};
00:15:37.467  json.nvme_error_information_log.table[10].error_count = 38791;
00:15:37.467  json.nvme_error_information_log.table[10].lba = {};
00:15:37.467  json.nvme_error_information_log.table[10].lba.value = 0;
00:15:37.467  json.nvme_error_information_log.table[10].phase_tag = false;
00:15:37.467  json.nvme_error_information_log.table[10].status_field = {};
00:15:37.467  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:15:37.467  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:15:37.467  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:15:37.467  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:15:37.467  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:15:37.467  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:15:37.467  json.nvme_error_information_log.table[11] = {};
00:15:37.467  json.nvme_error_information_log.table[11].error_count = 38790;
00:15:37.467  json.nvme_error_information_log.table[11].lba = {};
00:15:37.467  json.nvme_error_information_log.table[11].lba.value = 0;
00:15:37.467  json.nvme_error_information_log.table[11].phase_tag = false;
00:15:37.467  json.nvme_error_information_log.table[11].status_field = {};
00:15:37.467  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:15:37.467  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:15:37.467  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:15:37.467  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:15:37.467  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:15:37.467  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:15:37.467  json.nvme_error_information_log.table[12] = {};
00:15:37.467  json.nvme_error_information_log.table[12].error_count = 38789;
00:15:37.467  json.nvme_error_information_log.table[12].lba = {};
00:15:37.467  json.nvme_error_information_log.table[12].lba.value = 0;
00:15:37.467  json.nvme_error_information_log.table[12].phase_tag = false;
00:15:37.467  json.nvme_error_information_log.table[12].status_field = {};
00:15:37.467  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:15:37.467  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:15:37.467  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:15:37.467  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:15:37.467  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:15:37.467  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:15:37.467  json.nvme_error_information_log.table[13] = {};
00:15:37.467  json.nvme_error_information_log.table[13].error_count = 38788;
00:15:37.467  json.nvme_error_information_log.table[13].lba = {};
00:15:37.467  json.nvme_error_information_log.table[13].lba.value = 0;
00:15:37.467  json.nvme_error_information_log.table[13].phase_tag = false;
00:15:37.467  json.nvme_error_information_log.table[13].status_field = {};
00:15:37.467  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:15:37.467  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:15:37.467  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:15:37.467  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:15:37.467  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:15:37.467  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:15:37.467  json.nvme_error_information_log.table[14] = {};
00:15:37.467  json.nvme_error_information_log.table[14].error_count = 38787;
00:15:37.467  json.nvme_error_information_log.table[14].lba = {};
00:15:37.467  json.nvme_error_information_log.table[14].lba.value = 0;
00:15:37.467  json.nvme_error_information_log.table[14].phase_tag = false;
00:15:37.467  json.nvme_error_information_log.table[14].status_field = {};
00:15:37.467  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:15:37.467  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:15:37.467  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:15:37.467  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:15:37.467  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:15:37.467  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:15:37.467  json.nvme_error_information_log.table[15] = {};
00:15:37.467  json.nvme_error_information_log.table[15].error_count = 38786;
00:15:37.467  json.nvme_error_information_log.table[15].lba = {};
00:15:37.467  json.nvme_error_information_log.table[15].lba.value = 0;
00:15:37.467  json.nvme_error_information_log.table[15].phase_tag = false;
00:15:37.467  json.nvme_error_information_log.table[15].status_field = {};
00:15:37.467  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:15:37.467  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:15:37.467  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:15:37.467  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:15:37.467  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:15:37.468  json.nvme_error_information_log.table[1].error_count = 38800;
00:15:37.468  json.nvme_error_information_log.table[1].lba = {};
00:15:37.468  json.nvme_error_information_log.table[1].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[1].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[1].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:15:37.468  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:15:37.468  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:15:37.468  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:15:37.468  json.nvme_error_information_log.table[2] = {};
00:15:37.468  json.nvme_error_information_log.table[2].error_count = 38799;
00:15:37.468  json.nvme_error_information_log.table[2].lba = {};
00:15:37.468  json.nvme_error_information_log.table[2].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[2].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[2].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:15:37.468  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:15:37.468  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:15:37.468  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:15:37.468  json.nvme_error_information_log.table[3] = {};
00:15:37.468  json.nvme_error_information_log.table[3].error_count = 38798;
00:15:37.468  json.nvme_error_information_log.table[3].lba = {};
00:15:37.468  json.nvme_error_information_log.table[3].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[3].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[3].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:15:37.468  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:15:37.468  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:15:37.468  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:15:37.468  json.nvme_error_information_log.table[4] = {};
00:15:37.468  json.nvme_error_information_log.table[4].error_count = 38797;
00:15:37.468  json.nvme_error_information_log.table[4].lba = {};
00:15:37.468  json.nvme_error_information_log.table[4].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[4].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[4].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:15:37.468  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:15:37.468  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:15:37.468  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:15:37.468  json.nvme_error_information_log.table[5] = {};
00:15:37.468  json.nvme_error_information_log.table[5].error_count = 38796;
00:15:37.468  json.nvme_error_information_log.table[5].lba = {};
00:15:37.468  json.nvme_error_information_log.table[5].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[5].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[5].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:15:37.468  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:15:37.468  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:15:37.468  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:15:37.468  json.nvme_error_information_log.table[6] = {};
00:15:37.468  json.nvme_error_information_log.table[6].error_count = 38795;
00:15:37.468  json.nvme_error_information_log.table[6].lba = {};
00:15:37.468  json.nvme_error_information_log.table[6].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[6].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[6].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:15:37.468  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:15:37.468  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:15:37.468  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:15:37.468  json.nvme_error_information_log.table[7] = {};
00:15:37.468  json.nvme_error_information_log.table[7].error_count = 38794;
00:15:37.468  json.nvme_error_information_log.table[7].lba = {};
00:15:37.468  json.nvme_error_information_log.table[7].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[7].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[7].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:15:37.468  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:15:37.468  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:15:37.468  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:15:37.468  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:15:37.468  json.nvme_error_information_log.table[8] = {};
00:15:37.468  json.nvme_error_information_log.table[8].error_count = 38793;
00:15:37.468  json.nvme_error_information_log.table[8].lba = {};
00:15:37.468  json.nvme_error_information_log.table[8].lba.value = 0;
00:15:37.468  json.nvme_error_information_log.table[8].phase_tag = false;
00:15:37.468  json.nvme_error_information_log.table[8].status_field = {};
00:15:37.468  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:15:37.468  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:15:37.469  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:15:37.469  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:15:37.469  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:15:37.469  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:15:37.469  json.nvme_error_information_log.table[9] = {};
00:15:37.469  json.nvme_error_information_log.table[9].error_count = 38792;
00:15:37.469  json.nvme_error_information_log.table[9].lba = {};
00:15:37.469  json.nvme_error_information_log.table[9].lba.value = 0;
00:15:37.469  json.nvme_error_information_log.table[9].phase_tag = false;
00:15:37.469  json.nvme_error_information_log.table[9].status_field = {};
00:15:37.469  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:15:37.469  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:15:37.469  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:15:37.469  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:15:37.469  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:15:37.469  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:15:37.469  json.nvme_error_information_log.unread = 48;
00:15:37.469  json.nvme_ieee_oui_identifier = 6083300;
00:15:37.469  json.nvme_number_of_namespaces = 128;
00:15:37.469  json.nvme_pci_vendor = {};
00:15:37.469  json.nvme_pci_vendor.id = 32902;
00:15:37.469  json.nvme_pci_vendor.subsystem_id = 32902;
00:15:37.469  json.nvme_smart_health_information_log = {};
00:15:37.469  json.nvme_smart_health_information_log.available_spare = 99;
00:15:37.469  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:15:37.469  json.nvme_smart_health_information_log.controller_busy_time = 3927;
00:15:37.469  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:15:37.469  json.nvme_smart_health_information_log.critical_warning = 0;
00:15:37.469  json.nvme_smart_health_information_log.data_units_read = 631286616;
00:15:37.469  json.nvme_smart_health_information_log.data_units_written = 792639254;
00:15:37.469  json.nvme_smart_health_information_log.host_reads = 37097247546;
00:15:37.469  json.nvme_smart_health_information_log.host_writes = 43076543781;
00:15:37.469  json.nvme_smart_health_information_log.media_errors = 0;
00:15:37.469  json.nvme_smart_health_information_log.num_err_log_entries = 38801;
00:15:37.469  json.nvme_smart_health_information_log.percentage_used = 32;
00:15:37.469  json.nvme_smart_health_information_log.power_cycles = 31;
00:15:37.469  json.nvme_smart_health_information_log.power_on_hours = 20880;
00:15:37.469  json.nvme_smart_health_information_log.temperature = 37;
00:15:37.469  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:15:37.469  json.nvme_smart_health_information_log.warning_temp_time = 2211;
00:15:37.469  json.nvme_total_capacity = 4000787030016;
00:15:37.469  json.nvme_unallocated_capacity = 0;
00:15:37.469  json.nvme_version = {};
00:15:37.469  json.nvme_version.string = "1.2";
00:15:37.469  json.nvme_version.value = 66048;
00:15:37.469  json.power_cycle_count = 31;
00:15:37.469  json.power_on_time = {};
00:15:37.469  json.power_on_time.hours = 20880;
00:15:37.469  json.serial_number = "BTLJ83030AK84P0DGN";
00:15:37.469  json.smartctl = {};
00:15:37.469  json.smartctl.argv = [];
00:15:37.469  json.smartctl.argv[0] = "smartctl";
00:15:37.469  json.smartctl.argv[1] = "-d";
00:15:37.469  json.smartctl.argv[2] = "nvme";
00:15:37.469  json.smartctl.argv[3] = "--json=g";
00:15:37.469  json.smartctl.argv[4] = "-a";
00:15:37.469  json.smartctl.build_info = "(local build)";
00:15:37.469  json.smartctl.exit_status = 0;
00:15:37.469  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:15:37.469  json.smartctl.pre_release = false;
00:15:37.469  json.smartctl.svn_revision = "5530";
00:15:37.469  json.smartctl.version = [];
00:15:37.469  json.smartctl.version[0] = 7;
00:15:37.469  json.smartctl.version[1] = 4;
00:15:37.469  json.smart_status = {};
00:15:37.469  json.smart_status.nvme = {};
00:15:37.469  json.smart_status.nvme.value = 0;
00:15:37.469  json.smart_status.passed = true;
00:15:37.469  json.smart_support = {};
00:15:37.469  json.smart_support.available = true;
00:15:37.469  json.smart_support.enabled = true;
00:15:37.469  json.temperature = {};
00:15:37.469  json.temperature.current = 37;'
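The diff just below uses --changed-group-format='%<' with an empty --unchanged-group-format, so it emits only the lines unique to the first input (the kernel-side smartctl capture) and nothing for lines the two captures share; the test passes when only expected volatile fields surface. A hedged dict-based equivalent, reusing the hypothetical parse_flat_smartctl sketch from earlier; the VOLATILE set is inferred from the fields that actually differ in this log (local_time, data_units_read, host_reads):

    # Sketch of the same check: compare parsed outputs while ignoring
    # fields expected to drift between the two smartctl invocations.
    VOLATILE = {
        "json.local_time.asctime",
        "json.local_time.time_t",
        "json.nvme_smart_health_information_log.data_units_read",
        "json.nvme_smart_health_information_log.host_reads",
    }

    def smart_outputs_match(kernel, cuse):
        strip = lambda d: {k: v for k, v in d.items() if k not in VOLATILE}
        return strip(kernel) == strip(cuse)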
00:15:37.469    00:46:26	-- cuse/spdk_smartctl_cuse.sh@51 -- # diff '--changed-group-format=%<' --unchanged-group-format= /dev/fd/62 /dev/fd/61
00:15:37.469     00:46:26	-- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {};
00:15:37.469  json.device = {};
00:15:37.469  json.device.protocol = "NVMe";
00:15:37.469  json.device.type = "nvme";
00:15:37.469  json.firmware_version = "VDV10184";
00:15:37.469  json.json_format_version = [];
00:15:37.469  json.json_format_version[0] = 1;
00:15:37.469  json.json_format_version[1] = 0;
00:15:37.469  json.local_time = {};
00:15:37.469  json.local_time.asctime = "Tue Dec 17 00:46:10 2024 CET";
00:15:37.469  json.local_time.time_t = 1734392770;
00:15:37.473  json.model_name = "INTEL SSDPE2KX040T8";
00:15:37.473  json.nvme_controller_id = 0;
00:15:37.473  json.nvme_error_information_log = {};
00:15:37.473  json.nvme_error_information_log.read = 16;
00:15:37.473  json.nvme_error_information_log.size = 64;
00:15:37.473  json.nvme_error_information_log.table = [];
00:15:37.473  json.nvme_error_information_log.table[0] = {};
00:15:37.473  json.nvme_error_information_log.table[0].error_count = 38801;
00:15:37.473  json.nvme_error_information_log.table[0].lba = {};
00:15:37.473  json.nvme_error_information_log.table[0].lba.value = 0;
00:15:37.473  json.nvme_error_information_log.table[0].phase_tag = false;
00:15:37.473  json.nvme_error_information_log.table[0].status_field = {};
00:15:37.473  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:15:37.473  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:15:37.473  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:15:37.473  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:15:37.473  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:15:37.473  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:15:37.473  json.nvme_error_information_log.table[1] = {};
00:15:37.474  json.nvme_error_information_log.table[10] = {};
00:15:37.474  json.nvme_error_information_log.table[10].error_count = 38791;
00:15:37.474  json.nvme_error_information_log.table[10].lba = {};
00:15:37.474  json.nvme_error_information_log.table[10].lba.value = 0;
00:15:37.474  json.nvme_error_information_log.table[10].phase_tag = false;
00:15:37.474  json.nvme_error_information_log.table[10].status_field = {};
00:15:37.474  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:15:37.474  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:15:37.474  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:15:37.474  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:15:37.474  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:15:37.474  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:15:37.474  json.nvme_error_information_log.table[11] = {};
00:15:37.474  json.nvme_error_information_log.table[11].error_count = 38790;
00:15:37.474  json.nvme_error_information_log.table[11].lba = {};
00:15:37.474  json.nvme_error_information_log.table[11].lba.value = 0;
00:15:37.474  json.nvme_error_information_log.table[11].phase_tag = false;
00:15:37.474  json.nvme_error_information_log.table[11].status_field = {};
00:15:37.474  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:15:37.474  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:15:37.474  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:15:37.474  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:15:37.474  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:15:37.474  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:15:37.474  json.nvme_error_information_log.table[12] = {};
00:15:37.474  json.nvme_error_information_log.table[12].error_count = 38789;
00:15:37.474  json.nvme_error_information_log.table[12].lba = {};
00:15:37.474  json.nvme_error_information_log.table[12].lba.value = 0;
00:15:37.474  json.nvme_error_information_log.table[12].phase_tag = false;
00:15:37.474  json.nvme_error_information_log.table[12].status_field = {};
00:15:37.474  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:15:37.474  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:15:37.474  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:15:37.474  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:15:37.474  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:15:37.474  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:15:37.474  json.nvme_error_information_log.table[13] = {};
00:15:37.474  json.nvme_error_information_log.table[13].error_count = 38788;
00:15:37.474  json.nvme_error_information_log.table[13].lba = {};
00:15:37.474  json.nvme_error_information_log.table[13].lba.value = 0;
00:15:37.474  json.nvme_error_information_log.table[13].phase_tag = false;
00:15:37.474  json.nvme_error_information_log.table[13].status_field = {};
00:15:37.474  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:15:37.474  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:15:37.474  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:15:37.474  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:15:37.474  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:15:37.474  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:15:37.474  json.nvme_error_information_log.table[14] = {};
00:15:37.474  json.nvme_error_information_log.table[14].error_count = 38787;
00:15:37.474  json.nvme_error_information_log.table[14].lba = {};
00:15:37.474  json.nvme_error_information_log.table[14].lba.value = 0;
00:15:37.474  json.nvme_error_information_log.table[14].phase_tag = false;
00:15:37.474  json.nvme_error_information_log.table[14].status_field = {};
00:15:37.474  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:15:37.474  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:15:37.474  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:15:37.474  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:15:37.474  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:15:37.474  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:15:37.474  json.nvme_error_information_log.table[15] = {};
00:15:37.474  json.nvme_error_information_log.table[15].error_count = 38786;
00:15:37.474  json.nvme_error_information_log.table[15].lba = {};
00:15:37.474  json.nvme_error_information_log.table[15].lba.value = 0;
00:15:37.474  json.nvme_error_information_log.table[15].phase_tag = false;
00:15:37.474  json.nvme_error_information_log.table[15].status_field = {};
00:15:37.474  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:15:37.474  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:15:37.474  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:15:37.474  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:15:37.474  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:15:37.474  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:15:37.474  json.nvme_error_information_log.table[1].error_count = 38800;
00:15:37.474  json.nvme_error_information_log.table[1].lba = {};
00:15:37.474  json.nvme_error_information_log.table[1].lba.value = 0;
00:15:37.474  json.nvme_error_information_log.table[1].phase_tag = false;
00:15:37.474  json.nvme_error_information_log.table[1].status_field = {};
00:15:37.474  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:15:37.474  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:15:37.474  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:15:37.474  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:15:37.474  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:15:37.474  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:15:37.474  json.nvme_error_information_log.table[2] = {};
00:15:37.474  json.nvme_error_information_log.table[2].error_count = 38799;
00:15:37.475  json.nvme_error_information_log.table[2].lba = {};
00:15:37.475  json.nvme_error_information_log.table[2].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[2].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[2].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:15:37.475  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:15:37.475  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:15:37.475  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:15:37.475  json.nvme_error_information_log.table[3] = {};
00:15:37.475  json.nvme_error_information_log.table[3].error_count = 38798;
00:15:37.475  json.nvme_error_information_log.table[3].lba = {};
00:15:37.475  json.nvme_error_information_log.table[3].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[3].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[3].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:15:37.475  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:15:37.475  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:15:37.475  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:15:37.475  json.nvme_error_information_log.table[4] = {};
00:15:37.475  json.nvme_error_information_log.table[4].error_count = 38797;
00:15:37.475  json.nvme_error_information_log.table[4].lba = {};
00:15:37.475  json.nvme_error_information_log.table[4].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[4].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[4].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:15:37.475  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:15:37.475  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:15:37.475  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:15:37.475  json.nvme_error_information_log.table[5] = {};
00:15:37.475  json.nvme_error_information_log.table[5].error_count = 38796;
00:15:37.475  json.nvme_error_information_log.table[5].lba = {};
00:15:37.475  json.nvme_error_information_log.table[5].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[5].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[5].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:15:37.475  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:15:37.475  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:15:37.475  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:15:37.475  json.nvme_error_information_log.table[6] = {};
00:15:37.475  json.nvme_error_information_log.table[6].error_count = 38795;
00:15:37.475  json.nvme_error_information_log.table[6].lba = {};
00:15:37.475  json.nvme_error_information_log.table[6].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[6].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[6].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:15:37.475  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:15:37.475  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:15:37.475  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:15:37.475  json.nvme_error_information_log.table[7] = {};
00:15:37.475  json.nvme_error_information_log.table[7].error_count = 38794;
00:15:37.475  json.nvme_error_information_log.table[7].lba = {};
00:15:37.475  json.nvme_error_information_log.table[7].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[7].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[7].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:15:37.475  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:15:37.475  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:15:37.475  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:15:37.475  json.nvme_error_information_log.table[8] = {};
00:15:37.475  json.nvme_error_information_log.table[8].error_count = 38793;
00:15:37.475  json.nvme_error_information_log.table[8].lba = {};
00:15:37.475  json.nvme_error_information_log.table[8].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[8].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[8].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:15:37.475  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:15:37.475  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:15:37.475  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:15:37.475  json.nvme_error_information_log.table[9] = {};
00:15:37.475  json.nvme_error_information_log.table[9].error_count = 38792;
00:15:37.475  json.nvme_error_information_log.table[9].lba = {};
00:15:37.475  json.nvme_error_information_log.table[9].lba.value = 0;
00:15:37.475  json.nvme_error_information_log.table[9].phase_tag = false;
00:15:37.475  json.nvme_error_information_log.table[9].status_field = {};
00:15:37.475  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:15:37.475  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:15:37.475  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:15:37.476  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:15:37.476  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:15:37.476  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:15:37.476  json.nvme_error_information_log.unread = 48;
00:15:37.476  json.nvme_ieee_oui_identifier = 6083300;
00:15:37.476  json.nvme_number_of_namespaces = 128;
00:15:37.476  json.nvme_pci_vendor = {};
00:15:37.476  json.nvme_pci_vendor.id = 32902;
00:15:37.476  json.nvme_pci_vendor.subsystem_id = 32902;
00:15:37.476  json.nvme_smart_health_information_log = {};
00:15:37.476  json.nvme_smart_health_information_log.available_spare = 99;
00:15:37.476  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:15:37.476  json.nvme_smart_health_information_log.controller_busy_time = 3927;
00:15:37.476  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:15:37.476  json.nvme_smart_health_information_log.critical_warning = 0;
00:15:37.476  json.nvme_smart_health_information_log.data_units_read = 631286614;
00:15:37.476  json.nvme_smart_health_information_log.data_units_written = 792639254;
00:15:37.476  json.nvme_smart_health_information_log.host_reads = 37097247491;
00:15:37.476  json.nvme_smart_health_information_log.host_writes = 43076543781;
00:15:37.476  json.nvme_smart_health_information_log.media_errors = 0;
00:15:37.476  json.nvme_smart_health_information_log.num_err_log_entries = 38801;
00:15:37.476  json.nvme_smart_health_information_log.percentage_used = 32;
00:15:37.476  json.nvme_smart_health_information_log.power_cycles = 31;
00:15:37.476  json.nvme_smart_health_information_log.power_on_hours = 20880;
00:15:37.476  json.nvme_smart_health_information_log.temperature = 37;
00:15:37.476  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:15:37.476  json.nvme_smart_health_information_log.warning_temp_time = 2211;
00:15:37.476  json.nvme_total_capacity = 4000787030016;
00:15:37.476  json.nvme_unallocated_capacity = 0;
00:15:37.476  json.nvme_version = {};
00:15:37.476  json.nvme_version.string = "1.2";
00:15:37.476  json.nvme_version.value = 66048;
00:15:37.476  json.power_cycle_count = 31;
00:15:37.476  json.power_on_time = {};
00:15:37.476  json.power_on_time.hours = 20880;
00:15:37.476  json.serial_number = "BTLJ83030AK84P0DGN";
00:15:37.476  json.smartctl = {};
00:15:37.476  json.smartctl.argv = [];
00:15:37.476  json.smartctl.argv[0] = "smartctl";
00:15:37.476  json.smartctl.argv[1] = "-d";
00:15:37.476  json.smartctl.argv[2] = "nvme";
00:15:37.476  json.smartctl.argv[3] = "--json=g";
00:15:37.476  json.smartctl.argv[4] = "-a";
00:15:37.476  json.smartctl.build_info = "(local build)";
00:15:37.476  json.smartctl.exit_status = 0;
00:15:37.476  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:15:37.476  json.smartctl.pre_release = false;
00:15:37.476  json.smartctl.svn_revision = "5530";
00:15:37.476  json.smartctl.version = [];
00:15:37.476  json.smartctl.version[0] = 7;
00:15:37.476  json.smartctl.version[1] = 4;
00:15:37.476  json.smart_status = {};
00:15:37.476  json.smart_status.nvme = {};
00:15:37.476  json.smart_status.nvme.value = 0;
00:15:37.476  json.smart_status.passed = true;
00:15:37.476  json.smart_support = {};
00:15:37.476  json.smart_support.available = true;
00:15:37.476  json.smart_support.enabled = true;
00:15:37.476  json.temperature = {};
00:15:37.476  json.temperature.current = 37;'
00:15:37.476     00:46:26	-- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {};
00:15:37.476  json.device = {};
00:15:37.476  json.device.protocol = "NVMe";
00:15:37.476  json.device.type = "nvme";
00:15:37.476  json.firmware_version = "VDV10184";
00:15:37.476  json.json_format_version = [];
00:15:37.476  json.json_format_version[0] = 1;
00:15:37.476  json.json_format_version[1] = 0;
00:15:37.476  json.local_time = {};
00:15:37.476  json.local_time.asctime = "Tue Dec 17 00:46:26 2024 CET";
00:15:37.476  json.local_time.time_t = 1734392786;
00:15:37.476  json.model_name = "INTEL SSDPE2KX040T8";
00:15:37.476  json.nvme_controller_id = 0;
00:15:37.476  json.nvme_error_information_log = {};
00:15:37.476  json.nvme_error_information_log.read = 16;
00:15:37.476  json.nvme_error_information_log.size = 64;
00:15:37.476  json.nvme_error_information_log.table = [];
00:15:37.476  json.nvme_error_information_log.table[0] = {};
00:15:37.476  json.nvme_error_information_log.table[0].error_count = 38801;
00:15:37.476  json.nvme_error_information_log.table[0].lba = {};
00:15:37.476  json.nvme_error_information_log.table[0].lba.value = 0;
00:15:37.476  json.nvme_error_information_log.table[0].phase_tag = false;
00:15:37.476  json.nvme_error_information_log.table[0].status_field = {};
00:15:37.476  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:15:37.476  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:15:37.476  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:15:37.476  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:15:37.476  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:15:37.476  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:15:37.476  json.nvme_error_information_log.table[1] = {};
00:15:37.476  json.nvme_error_information_log.table[10] = {};
00:15:37.476  json.nvme_error_information_log.table[10].error_count = 38791;
00:15:37.476  json.nvme_error_information_log.table[10].lba = {};
00:15:37.476  json.nvme_error_information_log.table[10].lba.value = 0;
00:15:37.476  json.nvme_error_information_log.table[10].phase_tag = false;
00:15:37.476  json.nvme_error_information_log.table[10].status_field = {};
00:15:37.476  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:15:37.476  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:15:37.476  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:15:37.477  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:15:37.477  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:15:37.477  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:15:37.477  json.nvme_error_information_log.table[11] = {};
00:15:37.477  json.nvme_error_information_log.table[11].error_count = 38790;
00:15:37.477  json.nvme_error_information_log.table[11].lba = {};
00:15:37.477  json.nvme_error_information_log.table[11].lba.value = 0;
00:15:37.477  json.nvme_error_information_log.table[11].phase_tag = false;
00:15:37.477  json.nvme_error_information_log.table[11].status_field = {};
00:15:37.477  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:15:37.477  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:15:37.477  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:15:37.477  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:15:37.477  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:15:37.477  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:15:37.477  json.nvme_error_information_log.table[12] = {};
00:15:37.477  json.nvme_error_information_log.table[12].error_count = 38789;
00:15:37.477  json.nvme_error_information_log.table[12].lba = {};
00:15:37.477  json.nvme_error_information_log.table[12].lba.value = 0;
00:15:37.477  json.nvme_error_information_log.table[12].phase_tag = false;
00:15:37.477  json.nvme_error_information_log.table[12].status_field = {};
00:15:37.477  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:15:37.477  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:15:37.477  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:15:37.477  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:15:37.477  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:15:37.477  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:15:37.477  json.nvme_error_information_log.table[13] = {};
00:15:37.477  json.nvme_error_information_log.table[13].error_count = 38788;
00:15:37.477  json.nvme_error_information_log.table[13].lba = {};
00:15:37.477  json.nvme_error_information_log.table[13].lba.value = 0;
00:15:37.477  json.nvme_error_information_log.table[13].phase_tag = false;
00:15:37.477  json.nvme_error_information_log.table[13].status_field = {};
00:15:37.477  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:15:37.477  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:15:37.477  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:15:37.477  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:15:37.477  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:15:37.477  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:15:37.477  json.nvme_error_information_log.table[14] = {};
00:15:37.477  json.nvme_error_information_log.table[14].error_count = 38787;
00:15:37.477  json.nvme_error_information_log.table[14].lba = {};
00:15:37.477  json.nvme_error_information_log.table[14].lba.value = 0;
00:15:37.477  json.nvme_error_information_log.table[14].phase_tag = false;
00:15:37.477  json.nvme_error_information_log.table[14].status_field = {};
00:15:37.477  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:15:37.477  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:15:37.477  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:15:37.477  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:15:37.477  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:15:37.477  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:15:37.477  json.nvme_error_information_log.table[15] = {};
00:15:37.477  json.nvme_error_information_log.table[15].error_count = 38786;
00:15:37.477  json.nvme_error_information_log.table[15].lba = {};
00:15:37.477  json.nvme_error_information_log.table[15].lba.value = 0;
00:15:37.477  json.nvme_error_information_log.table[15].phase_tag = false;
00:15:37.477  json.nvme_error_information_log.table[15].status_field = {};
00:15:37.477  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:15:37.477  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:15:37.477  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:15:37.477  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:15:37.477  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:15:37.477  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:15:37.477  json.nvme_error_information_log.table[1].error_count = 38800;
00:15:37.477  json.nvme_error_information_log.table[1].lba = {};
00:15:37.477  json.nvme_error_information_log.table[1].lba.value = 0;
00:15:37.477  json.nvme_error_information_log.table[1].phase_tag = false;
00:15:37.477  json.nvme_error_information_log.table[1].status_field = {};
00:15:37.477  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:15:37.477  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:15:37.477  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:15:37.477  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:15:37.477  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:15:37.477  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:15:37.477  json.nvme_error_information_log.table[2] = {};
00:15:37.477  json.nvme_error_information_log.table[2].error_count = 38799;
00:15:37.477  json.nvme_error_information_log.table[2].lba = {};
00:15:37.477  json.nvme_error_information_log.table[2].lba.value = 0;
00:15:37.477  json.nvme_error_information_log.table[2].phase_tag = false;
00:15:37.477  json.nvme_error_information_log.table[2].status_field = {};
00:15:37.477  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:15:37.477  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:15:37.478  json.nvme_error_information_log.table[3] = {};
00:15:37.478  json.nvme_error_information_log.table[3].error_count = 38798;
00:15:37.478  json.nvme_error_information_log.table[3].lba = {};
00:15:37.478  json.nvme_error_information_log.table[3].lba.value = 0;
00:15:37.478  json.nvme_error_information_log.table[3].phase_tag = false;
00:15:37.478  json.nvme_error_information_log.table[3].status_field = {};
00:15:37.478  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:15:37.478  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:15:37.478  json.nvme_error_information_log.table[4] = {};
00:15:37.478  json.nvme_error_information_log.table[4].error_count = 38797;
00:15:37.478  json.nvme_error_information_log.table[4].lba = {};
00:15:37.478  json.nvme_error_information_log.table[4].lba.value = 0;
00:15:37.478  json.nvme_error_information_log.table[4].phase_tag = false;
00:15:37.478  json.nvme_error_information_log.table[4].status_field = {};
00:15:37.478  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:15:37.478  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:15:37.478  json.nvme_error_information_log.table[5] = {};
00:15:37.478  json.nvme_error_information_log.table[5].error_count = 38796;
00:15:37.478  json.nvme_error_information_log.table[5].lba = {};
00:15:37.478  json.nvme_error_information_log.table[5].lba.value = 0;
00:15:37.478  json.nvme_error_information_log.table[5].phase_tag = false;
00:15:37.478  json.nvme_error_information_log.table[5].status_field = {};
00:15:37.478  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:15:37.478  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:15:37.478  json.nvme_error_information_log.table[6] = {};
00:15:37.478  json.nvme_error_information_log.table[6].error_count = 38795;
00:15:37.478  json.nvme_error_information_log.table[6].lba = {};
00:15:37.478  json.nvme_error_information_log.table[6].lba.value = 0;
00:15:37.478  json.nvme_error_information_log.table[6].phase_tag = false;
00:15:37.478  json.nvme_error_information_log.table[6].status_field = {};
00:15:37.478  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:15:37.478  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:15:37.478  json.nvme_error_information_log.table[7] = {};
00:15:37.478  json.nvme_error_information_log.table[7].error_count = 38794;
00:15:37.478  json.nvme_error_information_log.table[7].lba = {};
00:15:37.478  json.nvme_error_information_log.table[7].lba.value = 0;
00:15:37.478  json.nvme_error_information_log.table[7].phase_tag = false;
00:15:37.478  json.nvme_error_information_log.table[7].status_field = {};
00:15:37.478  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:15:37.478  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:15:37.478  json.nvme_error_information_log.table[8] = {};
00:15:37.478  json.nvme_error_information_log.table[8].error_count = 38793;
00:15:37.478  json.nvme_error_information_log.table[8].lba = {};
00:15:37.478  json.nvme_error_information_log.table[8].lba.value = 0;
00:15:37.478  json.nvme_error_information_log.table[8].phase_tag = false;
00:15:37.478  json.nvme_error_information_log.table[8].status_field = {};
00:15:37.478  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:15:37.478  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:15:37.478  json.nvme_error_information_log.table[9] = {};
00:15:37.478  json.nvme_error_information_log.table[9].error_count = 38792;
00:15:37.478  json.nvme_error_information_log.table[9].lba = {};
00:15:37.478  json.nvme_error_information_log.table[9].lba.value = 0;
00:15:37.478  json.nvme_error_information_log.table[9].phase_tag = false;
00:15:37.478  json.nvme_error_information_log.table[9].status_field = {};
00:15:37.478  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:15:37.478  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:15:37.478  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:15:37.478  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:15:37.478  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:15:37.478  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:15:37.478  json.nvme_error_information_log.unread = 48;
00:15:37.478  json.nvme_ieee_oui_identifier = 6083300;
00:15:37.478  json.nvme_number_of_namespaces = 128;
00:15:37.478  json.nvme_pci_vendor = {};
00:15:37.478  json.nvme_pci_vendor.id = 32902;
00:15:37.478  json.nvme_pci_vendor.subsystem_id = 32902;
00:15:37.478  json.nvme_smart_health_information_log = {};
00:15:37.478  json.nvme_smart_health_information_log.available_spare = 99;
00:15:37.478  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:15:37.478  json.nvme_smart_health_information_log.controller_busy_time = 3927;
00:15:37.479  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:15:37.479  json.nvme_smart_health_information_log.critical_warning = 0;
00:15:37.479  json.nvme_smart_health_information_log.data_units_read = 631286616;
00:15:37.479  json.nvme_smart_health_information_log.data_units_written = 792639254;
00:15:37.479  json.nvme_smart_health_information_log.host_reads = 37097247546;
00:15:37.479  json.nvme_smart_health_information_log.host_writes = 43076543781;
00:15:37.479  json.nvme_smart_health_information_log.media_errors = 0;
00:15:37.479  json.nvme_smart_health_information_log.num_err_log_entries = 38801;
00:15:37.479  json.nvme_smart_health_information_log.percentage_used = 32;
00:15:37.479  json.nvme_smart_health_information_log.power_cycles = 31;
00:15:37.479  json.nvme_smart_health_information_log.power_on_hours = 20880;
00:15:37.479  json.nvme_smart_health_information_log.temperature = 37;
00:15:37.479  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:15:37.479  json.nvme_smart_health_information_log.warning_temp_time = 2211;
00:15:37.479  json.nvme_total_capacity = 4000787030016;
00:15:37.479  json.nvme_unallocated_capacity = 0;
00:15:37.479  json.nvme_version = {};
00:15:37.479  json.nvme_version.string = "1.2";
00:15:37.479  json.nvme_version.value = 66048;
00:15:37.479  json.power_cycle_count = 31;
00:15:37.479  json.power_on_time = {};
00:15:37.479  json.power_on_time.hours = 20880;
00:15:37.479  json.serial_number = "BTLJ83030AK84P0DGN";
00:15:37.479  json.smartctl = {};
00:15:37.479  json.smartctl.argv = [];
00:15:37.479  json.smartctl.argv[0] = "smartctl";
00:15:37.479  json.smartctl.argv[1] = "-d";
00:15:37.479  json.smartctl.argv[2] = "nvme";
00:15:37.479  json.smartctl.argv[3] = "--json=g";
00:15:37.479  json.smartctl.argv[4] = "-a";
00:15:37.479  json.smartctl.build_info = "(local build)";
00:15:37.479  json.smartctl.exit_status = 0;
00:15:37.479  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:15:37.479  json.smartctl.pre_release = false;
00:15:37.479  json.smartctl.svn_revision = "5530";
00:15:37.479  json.smartctl.version = [];
00:15:37.479  json.smartctl.version[0] = 7;
00:15:37.479  json.smartctl.version[1] = 4;
00:15:37.479  json.smart_status = {};
00:15:37.479  json.smart_status.nvme = {};
00:15:37.479  json.smart_status.nvme.value = 0;
00:15:37.479  json.smart_status.passed = true;
00:15:37.479  json.smart_support = {};
00:15:37.479  json.smart_support.available = true;
00:15:37.480  json.smart_support.enabled = true;
00:15:37.480  json.temperature = {};
00:15:37.480  json.temperature.current = 37;'
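[annotation] The dump above is smartctl's flat JSON output (--json=g, as json.smartctl.argv records): every leaf value is emitted as one "json.path = value;" line, which is what makes the snapshot line-diffable and greppable in the steps that follow. A minimal sketch of pulling a single leaf out of such a dump (device path mirrors the one under test):

    # --json=g prints one assignment per leaf, so plain grep can select fields.
    smartctl -d nvme --json=g -a /dev/spdk/nvme0 \
        | grep '^json\.nvme_smart_health_information_log\.temperature '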
00:15:37.480    00:46:26	-- cuse/spdk_smartctl_cuse.sh@51 -- # true
00:15:37.480   00:46:26	-- cuse/spdk_smartctl_cuse.sh@51 -- # DIFF_SMART_JSON='json.local_time.asctime = "Tue Dec 17 00:46:10 2024 CET";
00:15:37.480  json.local_time.time_t = 1734392770;
00:15:37.480  json.nvme_smart_health_information_log.data_units_read = 631286614;
00:15:37.480  json.nvme_smart_health_information_log.host_reads = 37097247491;'
00:15:37.480    00:46:26	-- cuse/spdk_smartctl_cuse.sh@54 -- # grep -v 'json\.nvme_smart_health_information_log\.\|json\.local_time\.\|json\.temperature\.\|json\.power_on_time\.hours'
00:15:37.480    00:46:26	-- cuse/spdk_smartctl_cuse.sh@54 -- # true
00:15:37.480   00:46:26	-- cuse/spdk_smartctl_cuse.sh@54 -- # ERR_SMART_JSON=
00:15:37.480   00:46:26	-- cuse/spdk_smartctl_cuse.sh@56 -- # '[' -n '' ']'
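[annotation] The empty ERR_SMART_JSON above is the pass condition: the only lines that differ between the two --json=g snapshots are volatile fields (local_time and the health counters), and the @54 grep strips exactly those before the -n test at @56. A condensed sketch of the same filter, assuming the two snapshots were captured into KERNEL_JSON and CUSE_JSON (variable names are illustrative, not from the script):

    # Any line surviving this filter is a real CUSE-vs-kernel mismatch.
    DIFF_SMART_JSON=$(diff <(echo "$KERNEL_JSON") <(echo "$CUSE_JSON") || true)
    ERR_SMART_JSON=$(grep -v \
        'json\.nvme_smart_health_information_log\.\|json\.local_time\.\|json\.temperature\.\|json\.power_on_time\.hours' \
        <<< "$DIFF_SMART_JSON" || true)
    [ -n "$ERR_SMART_JSON" ] && { echo "SMART JSON mismatch: $ERR_SMART_JSON"; exit 1; }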
00:15:37.480    00:46:26	-- cuse/spdk_smartctl_cuse.sh@61 -- # smartctl -d nvme -l error /dev/spdk/nvme0
00:15:37.480  [2024-12-17 00:46:26.273345] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:15:37.480   00:46:26	-- cuse/spdk_smartctl_cuse.sh@61 -- # CUSE_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.480  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.480  
00:15:37.480  === START OF SMART DATA SECTION ===
00:15:37.480  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:15:37.480  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:15:37.480    0      38801     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    1      38800     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    2      38799     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    3      38798     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    4      38797     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    5      38796     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    6      38795     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    7      38794     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    8      38793     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    9      38792     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   10      38791     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   11      38790     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   12      38789     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   13      38788     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   14      38787     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   15      38786     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480  ... (48 entries not read)'
00:15:37.480   00:46:26	-- cuse/spdk_smartctl_cuse.sh@62 -- # '[' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.480  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.480  
00:15:37.480  === START OF SMART DATA SECTION ===
00:15:37.480  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:15:37.480  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:15:37.480    0      38801     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    1      38800     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    2      38799     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    3      38798     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    4      38797     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    5      38796     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    6      38795     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    7      38794     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    8      38793     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    9      38792     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   10      38791     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   11      38790     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   12      38789     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   13      38788     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   14      38787     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   15      38786     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480  ... (48 entries not read)' '!=' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.480  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.480  
00:15:37.480  === START OF SMART DATA SECTION ===
00:15:37.480  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:15:37.480  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:15:37.480    0      38801     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    1      38800     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    2      38799     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    3      38798     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    4      38797     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    5      38796     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    6      38795     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    7      38794     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    8      38793     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480    9      38792     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   10      38791     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   11      38790     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   12      38789     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   13      38788     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   14      38787     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480   15      38786     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.480  ... (48 entries not read)' ']'
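[annotation] @62 then asserts that the error log read back through the CUSE node is byte-identical to the reference text; the '!=' test above fails because the two strings are equal, so no error path is taken. In sketch form, with the reference capture made explicit as an assumption (the script captures it at an earlier step not shown here; /dev/nvme0 is illustrative):

    REF_ERRLOG=$(smartctl -d nvme -l error /dev/nvme0)              # reference device (assumed)
    CUSE_SMART_ERRLOG=$(smartctl -d nvme -l error /dev/spdk/nvme0)  # via CUSE
    if [ "$CUSE_SMART_ERRLOG" != "$REF_ERRLOG" ]; then
        echo "error log mismatch"; exit 1
    fi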
00:15:37.480   00:46:26	-- cuse/spdk_smartctl_cuse.sh@68 -- # smartctl -d nvme -i /dev/spdk/nvme0n1
00:15:37.480  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.480  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.480  
00:15:37.480  === START OF INFORMATION SECTION ===
00:15:37.480  Model Number:                       INTEL SSDPE2KX040T8
00:15:37.480  Serial Number:                      BTLJ83030AK84P0DGN
00:15:37.480  Firmware Version:                   VDV10184
00:15:37.480  PCI Vendor/Subsystem ID:            0x8086
00:15:37.480  IEEE OUI Identifier:                0x5cd2e4
00:15:37.480  Total NVM Capacity:                 4,000,787,030,016 [4.00 TB]
00:15:37.480  Unallocated NVM Capacity:           0
00:15:37.480  Controller ID:                      0
00:15:37.480  NVMe Version:                       1.2
00:15:37.480  Number of Namespaces:               128
00:15:37.480  Namespace 1 Size/Capacity:          4,000,787,030,016 [4.00 TB]
00:15:37.480  Namespace 1 Formatted LBA Size:     512
00:15:37.480  Namespace 1 IEEE EUI-64:            000000 000000f76e
00:15:37.480  Local Time is:                      Tue Dec 17 00:46:26 2024 CET
00:15:37.481  
00:15:37.481   00:46:26	-- cuse/spdk_smartctl_cuse.sh@69 -- # smartctl -d nvme -c /dev/spdk/nvme0
00:15:37.481  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.481  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.481  
00:15:37.481  [2024-12-17 00:46:26.410008] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:15:37.481  === START OF INFORMATION SECTION ===
00:15:37.481  Firmware Updates (0x18):            4 Slots, no Reset required
00:15:37.481  Optional Admin Commands (0x000e):   Format Frmw_DL NS_Mngmt
00:15:37.481  Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
00:15:37.481  Log Page Attributes (0x0e):         Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
00:15:37.481  Maximum Data Transfer Size:         32 Pages
00:15:37.481  Warning  Comp. Temp. Threshold:     70 Celsius
00:15:37.481  Critical Comp. Temp. Threshold:     80 Celsius
00:15:37.481  
00:15:37.481  Supported Power States
00:15:37.481  St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
00:15:37.481   0 +    20.00W       -        -    0  0  0  0        0       0
00:15:37.481  
00:15:37.481   00:46:26	-- cuse/spdk_smartctl_cuse.sh@70 -- # smartctl -d nvme -A /dev/spdk/nvme0
00:15:37.481  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.481  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.481  
00:15:37.481  [2024-12-17 00:46:26.450136] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:15:37.481  === START OF SMART DATA SECTION ===
00:15:37.481  SMART/Health Information (NVMe Log 0x02)
00:15:37.481  Critical Warning:                   0x00
00:15:37.481  Temperature:                        37 Celsius
00:15:37.481  Available Spare:                    99%
00:15:37.481  Available Spare Threshold:          10%
00:15:37.481  Percentage Used:                    32%
00:15:37.481  Data Units Read:                    631,286,616 [323 TB]
00:15:37.481  Data Units Written:                 792,639,254 [405 TB]
00:15:37.481  Host Read Commands:                 37,097,247,546
00:15:37.481  Host Write Commands:                43,076,543,781
00:15:37.481  Controller Busy Time:               3,927
00:15:37.481  Power Cycles:                       31
00:15:37.481  Power On Hours:                     20,880
00:15:37.481  Unsafe Shutdowns:                   46
00:15:37.481  Media and Data Integrity Errors:    0
00:15:37.481  Error Information Log Entries:      38,801
00:15:37.481  Warning  Comp. Temperature Time:    2211
00:15:37.481  Critical Comp. Temperature Time:    0
00:15:37.481  
00:15:37.481   00:46:26	-- cuse/spdk_smartctl_cuse.sh@73 -- # smartctl -d nvme -x /dev/spdk/nvme0
00:15:37.481  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.481  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.481  
00:15:37.481  [2024-12-17 00:46:26.520296] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:15:37.481  === START OF INFORMATION SECTION ===
00:15:37.481  Model Number:                       INTEL SSDPE2KX040T8
00:15:37.481  Serial Number:                      BTLJ83030AK84P0DGN
00:15:37.481  Firmware Version:                   VDV10184
00:15:37.481  PCI Vendor/Subsystem ID:            0x8086
00:15:37.481  IEEE OUI Identifier:                0x5cd2e4
00:15:37.481  Total NVM Capacity:                 4,000,787,030,016 [4.00 TB]
00:15:37.481  Unallocated NVM Capacity:           0
00:15:37.481  Controller ID:                      0
00:15:37.481  NVMe Version:                       1.2
00:15:37.481  Number of Namespaces:               128
00:15:37.481  Local Time is:                      Tue Dec 17 00:46:26 2024 CET
00:15:37.481  Firmware Updates (0x18):            4 Slots, no Reset required
00:15:37.481  Optional Admin Commands (0x000e):   Format Frmw_DL NS_Mngmt
00:15:37.481  Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
00:15:37.481  Log Page Attributes (0x0e):         Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
00:15:37.481  Maximum Data Transfer Size:         32 Pages
00:15:37.481  Warning  Comp. Temp. Threshold:     70 Celsius
00:15:37.481  Critical Comp. Temp. Threshold:     80 Celsius
00:15:37.481  
00:15:37.481  Supported Power States
00:15:37.481  St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
00:15:37.481   0 +    20.00W       -        -    0  0  0  0        0       0
00:15:37.481  
00:15:37.481  === START OF SMART DATA SECTION ===
00:15:37.481  SMART overall-health self-assessment test result: PASSED
00:15:37.481  
00:15:37.481  SMART/Health Information (NVMe Log 0x02)
00:15:37.481  Critical Warning:                   0x00
00:15:37.481  Temperature:                        37 Celsius
00:15:37.481  Available Spare:                    99%
00:15:37.481  Available Spare Threshold:          10%
00:15:37.481  Percentage Used:                    32%
00:15:37.481  Data Units Read:                    631,286,616 [323 TB]
00:15:37.481  Data Units Written:                 792,639,254 [405 TB]
00:15:37.481  Host Read Commands:                 37,097,247,546
00:15:37.481  Host Write Commands:                43,076,543,781
00:15:37.481  Controller Busy Time:               3,927
00:15:37.481  Power Cycles:                       31
00:15:37.481  Power On Hours:                     20,880
00:15:37.481  Unsafe Shutdowns:                   46
00:15:37.481  Media and Data Integrity Errors:    0
00:15:37.481  Error Information Log Entries:      38,801
00:15:37.481  Warning  Comp. Temperature Time:    2211
00:15:37.481  Critical Comp. Temperature Time:    0
00:15:37.481  
00:15:37.481  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:15:37.481  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:15:37.481    0      38801     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    1      38800     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    2      38799     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    3      38798     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    4      38797     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    5      38796     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    6      38795     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    7      38794     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    8      38793     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481    9      38792     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481   10      38791     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481   11      38790     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481   12      38789     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481   13      38788     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481   14      38787     0       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481   15      38786     2       -  0xc00c      -            0     -     -  Internal Error
00:15:37.481  ... (48 entries not read)
00:15:37.481  
00:15:37.481  Self-tests not supported
00:15:37.481  
00:15:37.481   00:46:26	-- cuse/spdk_smartctl_cuse.sh@74 -- # smartctl -d nvme -H /dev/spdk/nvme0
00:15:37.481  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:15:37.481  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:15:37.481  
00:15:37.481  [2024-12-17 00:46:26.606850] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:15:37.481  === START OF SMART DATA SECTION ===
00:15:37.481  SMART overall-health self-assessment test result: PASSED
00:15:37.481  
00:15:37.481   00:46:26	-- cuse/spdk_smartctl_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:15:42.750   00:46:31	-- cuse/spdk_smartctl_cuse.sh@77 -- # sleep 1
00:15:43.319   00:46:32	-- cuse/spdk_smartctl_cuse.sh@78 -- # '[' -c /dev/spdk/nvme1 ']'
00:15:43.319   00:46:32	-- cuse/spdk_smartctl_cuse.sh@82 -- # trap - SIGINT SIGTERM EXIT
00:15:43.319   00:46:32	-- cuse/spdk_smartctl_cuse.sh@83 -- # killprocess 1018813
00:15:43.319   00:46:32	-- common/autotest_common.sh@936 -- # '[' -z 1018813 ']'
00:15:43.319   00:46:32	-- common/autotest_common.sh@940 -- # kill -0 1018813
00:15:43.319    00:46:32	-- common/autotest_common.sh@941 -- # uname
00:15:43.319   00:46:32	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:43.319    00:46:32	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1018813
00:15:43.319   00:46:32	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:43.319   00:46:32	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:43.319   00:46:32	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1018813'
00:15:43.319  killing process with pid 1018813
00:15:43.319   00:46:32	-- common/autotest_common.sh@955 -- # kill 1018813
00:15:43.319   00:46:32	-- common/autotest_common.sh@960 -- # wait 1018813
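[annotation] The killprocess sequence above reads as the trace of a helper roughly like the following (a condensed reconstruction from the @936-@960 steps; the real function in autotest_common.sh handles extra cases, e.g. targets running under sudo, that this sketch only notes):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # @936: need a pid
        kill -0 "$pid" || return 1                   # @940: must be alive
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # @942 -> reactor_0 here
            [ "$name" = sudo ] && return 1           # @946: sudo path, simplified away
        fi
        echo "killing process with pid $pid"         # @954
        kill "$pid"                                  # @955
        wait "$pid"                                  # @960
    }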
00:15:43.887  
00:15:43.887  real	0m32.103s
00:15:43.887  user	0m33.864s
00:15:43.887  sys	0m7.283s
00:15:43.887   00:46:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:43.887   00:46:32	-- common/autotest_common.sh@10 -- # set +x
00:15:43.887  ************************************
00:15:43.887  END TEST nvme_smartctl_cuse
00:15:43.887  ************************************
00:15:43.887   00:46:32	-- cuse/nvme_cuse.sh@22 -- # run_test nvme_ns_manage_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh
00:15:43.887   00:46:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:15:43.887   00:46:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:43.887   00:46:32	-- common/autotest_common.sh@10 -- # set +x
00:15:43.887  ************************************
00:15:43.887  START TEST nvme_ns_manage_cuse
00:15:43.887  ************************************
00:15:43.887   00:46:32	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh
00:15:43.887  * Looking for test storage...
00:15:43.887  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:15:43.887     00:46:33	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:43.887      00:46:33	-- common/autotest_common.sh@1690 -- # lcov --version
00:15:43.887      00:46:33	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:43.887     00:46:33	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:43.887     00:46:33	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:43.887     00:46:33	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:43.887     00:46:33	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:43.887     00:46:33	-- scripts/common.sh@335 -- # IFS=.-:
00:15:43.887     00:46:33	-- scripts/common.sh@335 -- # read -ra ver1
00:15:43.887     00:46:33	-- scripts/common.sh@336 -- # IFS=.-:
00:15:43.887     00:46:33	-- scripts/common.sh@336 -- # read -ra ver2
00:15:43.887     00:46:33	-- scripts/common.sh@337 -- # local 'op=<'
00:15:43.887     00:46:33	-- scripts/common.sh@339 -- # ver1_l=2
00:15:43.887     00:46:33	-- scripts/common.sh@340 -- # ver2_l=1
00:15:43.887     00:46:33	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:43.887     00:46:33	-- scripts/common.sh@343 -- # case "$op" in
00:15:43.887     00:46:33	-- scripts/common.sh@344 -- # : 1
00:15:43.887     00:46:33	-- scripts/common.sh@363 -- # (( v = 0 ))
00:15:43.887     00:46:33	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:43.887      00:46:33	-- scripts/common.sh@364 -- # decimal 1
00:15:43.887      00:46:33	-- scripts/common.sh@352 -- # local d=1
00:15:43.887      00:46:33	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:43.887      00:46:33	-- scripts/common.sh@354 -- # echo 1
00:15:44.146     00:46:33	-- scripts/common.sh@364 -- # ver1[v]=1
00:15:44.146      00:46:33	-- scripts/common.sh@365 -- # decimal 2
00:15:44.146      00:46:33	-- scripts/common.sh@352 -- # local d=2
00:15:44.146      00:46:33	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:44.146      00:46:33	-- scripts/common.sh@354 -- # echo 2
00:15:44.146     00:46:33	-- scripts/common.sh@365 -- # ver2[v]=2
00:15:44.146     00:46:33	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:44.146     00:46:33	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:44.146     00:46:33	-- scripts/common.sh@367 -- # return 0
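[annotation] The block above is scripts/common.sh deciding whether the installed lcov is older than 2: "1.15" and "2" are split on ".", "-" and ":" into arrays, missing components default to 0, and the first unequal pair (1 < 2) settles the '<' comparison, so lt 1.15 2 succeeds and the coverage-friendly LCOV_OPTS below get exported. A condensed sketch of that comparison (the real helper also validates each component through a decimal() wrapper, visible in the trace):

    # cmp_versions VER1 OP VER2, component-wise.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if (( d1 > d2 )); then [ "$op" = '>' ]; return; fi
            if (( d1 < d2 )); then [ "$op" = '<' ]; return; fi
        done
        [ "$op" = '=' ] || [ "$op" = '<=' ] || [ "$op" = '>=' ]   # all components equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov older than 2"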
00:15:44.146     00:46:33	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:44.146     00:46:33	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:44.146  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.146  		--rc genhtml_branch_coverage=1
00:15:44.146  		--rc genhtml_function_coverage=1
00:15:44.146  		--rc genhtml_legend=1
00:15:44.146  		--rc geninfo_all_blocks=1
00:15:44.146  		--rc geninfo_unexecuted_blocks=1
00:15:44.146  		
00:15:44.146  		'
00:15:44.146     00:46:33	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:44.146  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.146  		--rc genhtml_branch_coverage=1
00:15:44.146  		--rc genhtml_function_coverage=1
00:15:44.146  		--rc genhtml_legend=1
00:15:44.146  		--rc geninfo_all_blocks=1
00:15:44.146  		--rc geninfo_unexecuted_blocks=1
00:15:44.146  		
00:15:44.146  		'
00:15:44.146     00:46:33	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:15:44.146  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.146  		--rc genhtml_branch_coverage=1
00:15:44.146  		--rc genhtml_function_coverage=1
00:15:44.146  		--rc genhtml_legend=1
00:15:44.146  		--rc geninfo_all_blocks=1
00:15:44.146  		--rc geninfo_unexecuted_blocks=1
00:15:44.146  		
00:15:44.146  		'
00:15:44.146     00:46:33	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:15:44.146  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.146  		--rc genhtml_branch_coverage=1
00:15:44.146  		--rc genhtml_function_coverage=1
00:15:44.146  		--rc genhtml_legend=1
00:15:44.146  		--rc geninfo_all_blocks=1
00:15:44.146  		--rc geninfo_unexecuted_blocks=1
00:15:44.146  		
00:15:44.146  		'
00:15:44.146    00:46:33	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:15:44.146       00:46:33	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:15:44.146      00:46:33	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:15:44.146     00:46:33	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:15:44.146     00:46:33	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:15:44.146      00:46:33	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:44.146      00:46:33	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:44.146      00:46:33	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:44.146       00:46:33	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.146       00:46:33	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.146       00:46:33	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.146       00:46:33	-- paths/export.sh@5 -- # export PATH
00:15:44.146       00:46:33	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.146     00:46:33	-- nvme/functions.sh@10 -- # ctrls=()
00:15:44.146     00:46:33	-- nvme/functions.sh@10 -- # declare -A ctrls
00:15:44.146     00:46:33	-- nvme/functions.sh@11 -- # nvmes=()
00:15:44.146     00:46:33	-- nvme/functions.sh@11 -- # declare -A nvmes
00:15:44.146     00:46:33	-- nvme/functions.sh@12 -- # bdfs=()
00:15:44.146     00:46:33	-- nvme/functions.sh@12 -- # declare -A bdfs
00:15:44.146     00:46:33	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:15:44.146     00:46:33	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:15:44.146     00:46:33	-- nvme/functions.sh@14 -- # nvme_name=
00:15:44.146    00:46:33	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:15:44.146   00:46:33	-- cuse/nvme_ns_manage_cuse.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:15:47.429  Waiting for block devices as requested
00:15:47.429  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:15:47.429  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:15:47.429  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:15:47.429  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:15:47.429  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:15:47.687  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:15:47.687  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:15:47.687  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:15:47.945  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:15:47.945  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:15:47.945  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:15:48.203  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:15:48.203  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:15:48.203  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:15:48.462  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:15:48.462  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:15:48.462  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
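[annotation] Each "vfio-pci -> nvme" / "vfio-pci -> ioatdma" line above is setup.sh reset handing a PCI function back to its kernel driver. The underlying sysfs mechanics look roughly like this (a sketch of the standard driver_override interface, not a copy of setup.sh):

    bdf=0000:5e:00.0   # first device in the list above
    drv=nvme
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"      # detach vfio-pci
    echo "$drv" > "/sys/bus/pci/devices/$bdf/driver_override"    # pin the target driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                     # rebind
    echo ""     > "/sys/bus/pci/devices/$bdf/driver_override"    # drop the override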
00:15:48.462   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@11 -- # scan_nvme_ctrls
00:15:48.462   00:46:37	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:15:48.462   00:46:37	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:15:48.462   00:46:37	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:15:48.462   00:46:37	-- nvme/functions.sh@49 -- # pci=0000:5e:00.0
00:15:48.462   00:46:37	-- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0
00:15:48.462   00:46:37	-- scripts/common.sh@15 -- # local i
00:15:48.462   00:46:37	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:15:48.462   00:46:37	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:15:48.462   00:46:37	-- scripts/common.sh@24 -- # return 0
00:15:48.462   00:46:37	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
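[annotation] From here scan_nvme_ctrls builds an associative array per controller: nvme_get runs `nvme id-ctrl /dev/nvme0`, splits every output line on ':' into reg/val, and evals each pair into nvme0[reg]. The long read/eval trace that follows is that loop; condensed, it behaves like the sketch below (the real helper evals into a dynamically named array rather than a fixed one):

    declare -gA nvme0
    while IFS=: read -r reg val; do
        [ -n "$val" ] || continue        # skip lines without a value
        reg=${reg//[[:space:]]/}         # key, stripped of padding
        nvme0[$reg]=${val# }             # value, minus the separator space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[sn]}"                  # -> BTLJ83030AK84P0DGN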
00:15:48.462   00:46:37	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:15:48.462   00:46:37	-- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:15:48.462   00:46:37	-- nvme/functions.sh@18 -- # shift
00:15:48.462   00:46:37	-- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.462    00:46:37	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:15:48.462   00:46:37	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.462   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:15:48.462   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"'
00:15:48.462    00:46:37	-- nvme/functions.sh@23 -- # nvme0[vid]=0x8086
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.462   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:15:48.462   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"'
00:15:48.462    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.462   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.462   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  BTLJ83030AK84P0DGN   ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN  "'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN  '
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  INTEL SSDPE2KX040T8                      ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8                     "'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8                     '
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  VDV10184 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[rab]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  5cd2e4 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  5 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[mdts]=5
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x10200 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ver]=0x10200
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x989680 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0xe4e1c0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x200 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[oaes]=0x200
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ctratt]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[cntrltype]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.723   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.723   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:15:48.723    00:46:37	-- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.723   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[mec]=1
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[oacs]=0xe
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[acl]=3
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x18 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[frmw]=0x18
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[lpa]=0xe
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  63 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[elpe]=63
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[npss]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  353 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[cctemp]=353
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[kas]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.724   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.724   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:15:48.724    00:46:37	-- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.724   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[pels]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[nn]=128
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x6 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[oncs]=0x6
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[fna]=0x4
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[vwc]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[awun]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ocfs]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[sgls]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n   ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[subnqn]=
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0'
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.725   00:46:37	-- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:15:48.725   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:15:48.725    00:46:37	-- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:15:48.725   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n - ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:15:48.726   00:46:37	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:15:48.726   00:46:37	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:15:48.726   00:46:37	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:15:48.726   00:46:37	-- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@18 -- # shift
00:15:48.726   00:46:37	-- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726    00:46:37	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[flbas]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[mc]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[dpc]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[mcl]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[msrc]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  01000000f76e00000000000000000000 ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="01000000f76e00000000000000000000"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[nguid]=01000000f76e00000000000000000000
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.726   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  000000000000f76e ]]
00:15:48.726   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="000000000000f76e"'
00:15:48.726    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[eui64]=000000000000f76e
00:15:48.726   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.727   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.727   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0x2 (in use) ]]
00:15:48.727   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0x2 (in use)"'
00:15:48.727    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0x2 (in use)'
00:15:48.727   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.727   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.727   00:46:37	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:15:48.727   00:46:37	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0   lbads:12 rp:0 "'
00:15:48.727    00:46:37	-- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0   lbads:12 rp:0 '
00:15:48.727   00:46:37	-- nvme/functions.sh@21 -- # IFS=:
00:15:48.727   00:46:37	-- nvme/functions.sh@21 -- # read -r reg val
00:15:48.727   00:46:37	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:15:48.727   00:46:37	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:15:48.727   00:46:37	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:15:48.727   00:46:37	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0
00:15:48.727   00:46:37	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:15:48.727   00:46:37	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
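
The functions.sh@16-@23 lines above repeat one parsing loop per identify field: nvme-cli output is split on ':' into reg/val pairs and stored in a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace (the key-trimming detail and error redirection are assumptions; everything else mirrors the traced lines):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                      # global assoc array, as at @20
      while IFS=: read -r reg val; do          # split "reg : val" on ':', as at @21
          [[ -n $val ]] || continue            # skip lines with no value, as at @22
          reg=${reg//[[:space:]]/}             # trim the key (assumption)
          eval "${ref}[$reg]=\"${val# }\""     # e.g. nvme0[tnvmcap]=..., as at @23
      done < <(/usr/local/src/nvme-cli/nvme "$@" 2>/dev/null)   # as at @16
  }
  # Call shape seen in the trace: nvme_get nvme0n1 id-ns /dev/nvme0n1
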
00:15:48.727    00:46:37	-- cuse/nvme_ns_manage_cuse.sh@14 -- # get_nvme_with_ns_management
00:15:48.727    00:46:37	-- nvme/functions.sh@153 -- # local _ctrls
00:15:48.727    00:46:37	-- nvme/functions.sh@155 -- # _ctrls=($(get_nvmes_with_ns_management))
00:15:48.727     00:46:37	-- nvme/functions.sh@155 -- # get_nvmes_with_ns_management
00:15:48.727     00:46:37	-- nvme/functions.sh@144 -- # (( 1 == 0 ))
00:15:48.727     00:46:37	-- nvme/functions.sh@146 -- # local ctrl
00:15:48.727     00:46:37	-- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}"
00:15:48.727     00:46:37	-- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt
00:15:48.727     00:46:37	-- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt
00:15:48.727     00:46:37	-- nvme/functions.sh@122 -- # local -A bits
00:15:48.727     00:46:37	-- nvme/functions.sh@125 -- # bits["ss/sr"]=1
00:15:48.727     00:46:37	-- nvme/functions.sh@126 -- # bits["fnvme"]=2
00:15:48.727     00:46:37	-- nvme/functions.sh@127 -- # bits["fc/fi"]=4
00:15:48.727     00:46:37	-- nvme/functions.sh@128 -- # bits["nsmgt"]=8
00:15:48.727     00:46:37	-- nvme/functions.sh@129 -- # bits["self-test"]=16
00:15:48.727     00:46:37	-- nvme/functions.sh@130 -- # bits["directives"]=32
00:15:48.727     00:46:37	-- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64
00:15:48.727     00:46:37	-- nvme/functions.sh@132 -- # bits["virtmgt"]=128
00:15:48.727     00:46:37	-- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256
00:15:48.727     00:46:37	-- nvme/functions.sh@134 -- # bits["getlba"]=512
00:15:48.727     00:46:37	-- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024
00:15:48.727     00:46:37	-- nvme/functions.sh@137 -- # bit=nsmgt
00:15:48.727     00:46:37	-- nvme/functions.sh@138 -- # [[ -n 8 ]]
00:15:48.727      00:46:37	-- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs
00:15:48.727      00:46:37	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs
00:15:48.727      00:46:37	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:15:48.727      00:46:37	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:15:48.727      00:46:37	-- nvme/functions.sh@75 -- # [[ -n 0xe ]]
00:15:48.727      00:46:37	-- nvme/functions.sh@76 -- # echo 0xe
00:15:48.727     00:46:37	-- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] ))
00:15:48.727     00:46:37	-- nvme/functions.sh@148 -- # echo nvme0
00:15:48.727    00:46:37	-- nvme/functions.sh@156 -- # (( 1 > 0 ))
00:15:48.727    00:46:37	-- nvme/functions.sh@157 -- # echo nvme0
00:15:48.727    00:46:37	-- nvme/functions.sh@158 -- # return 0
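
get_nvmes_with_ns_management (@144-@158 above) keeps a controller only if its OACS word advertises Namespace Management, bit 3 of the map built at @125-@135. Condensed, with the values taken verbatim from the trace:

  declare -A bits=(
      [ss/sr]=1 [fnvme]=2 [fc/fi]=4 [nsmgt]=8
      [self-test]=16 [directives]=32 [nvme-mi-s/r]=64 [virtmgt]=128
      [doorbellbuf]=256 [getlba]=512 [commfeatlock]=1024
  )
  oacs=0xe                              # echoed at @76 for nvme0
  if (( oacs & bits[nsmgt] )); then     # 0xe & 0x8 != 0, so nvme0 qualifies
      echo nvme0
  fi
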
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@14 -- # nvme_name=nvme0
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@20 -- # nvme_dev=/dev/nvme0
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@21 -- # bdf=0000:5e:00.0
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@22 -- # nsids=($(get_nvme_nss "$nvme_name"))
00:15:48.727    00:46:37	-- cuse/nvme_ns_manage_cuse.sh@22 -- # get_nvme_nss nvme0
00:15:48.727    00:46:37	-- nvme/functions.sh@94 -- # local ctrl=nvme0
00:15:48.727    00:46:37	-- nvme/functions.sh@96 -- # [[ -n nvme0_ns ]]
00:15:48.727    00:46:37	-- nvme/functions.sh@97 -- # local -n _nss=nvme0_ns
00:15:48.727    00:46:37	-- nvme/functions.sh@99 -- # echo 1
00:15:48.727    00:46:37	-- cuse/nvme_ns_manage_cuse.sh@25 -- # get_nvme_ctrl_feature nvme0 oaes
00:15:48.727    00:46:37	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oaes
00:15:48.727    00:46:37	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:15:48.727    00:46:37	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:15:48.727    00:46:37	-- nvme/functions.sh@75 -- # [[ -n 0x200 ]]
00:15:48.727    00:46:37	-- nvme/functions.sh@76 -- # echo 0x200
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@25 -- # oaes=0x200
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@26 -- # aer_ns_change=0
00:15:48.727    00:46:37	-- cuse/nvme_ns_manage_cuse.sh@27 -- # get_nvme_ctrl_feature nvme0
00:15:48.727    00:46:37	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=cntlid
00:15:48.727    00:46:37	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:15:48.727    00:46:37	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:15:48.727    00:46:37	-- nvme/functions.sh@75 -- # [[ -n 0 ]]
00:15:48.727    00:46:37	-- nvme/functions.sh@76 -- # echo 0
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@27 -- # cntlid=0
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@70 -- # remove_all_namespaces
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces'
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:48.727  ---
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces'
00:15:48.727  delete all namespaces
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:48.727  ---
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}"
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@40 -- # info_print 'removing nsid=1'
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:48.727  ---
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1'
00:15:48.727  removing nsid=1
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:48.727  ---
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0
00:15:48.727  detach-ns: Success, nsid:1
00:15:48.727   00:46:37	-- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1
00:16:06.825  delete-ns: Success, deleted nsid:1
00:16:06.825   00:46:55	-- cuse/nvme_ns_manage_cuse.sh@72 -- # reset_nvme_if_aer_unsupported /dev/nvme0
00:16:06.825   00:46:55	-- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]]
00:16:06.825   00:46:55	-- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1
00:16:07.760   00:46:56	-- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0
00:16:07.761   00:46:57	-- cuse/nvme_ns_manage_cuse.sh@73 -- # sleep 1
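
The @72/@30-@32 sequence above is the fallback used throughout this test: with aer_ns_change=0 the controller raises no namespace-change AER events, so the script resets it to force a namespace rescan. A minimal sketch of that helper (structure inferred from the traced lines):

  reset_nvme_if_aer_unsupported() {
      if [[ $aer_ns_change -eq 0 ]]; then                 # as at @30
          sleep 1                                         # as at @31
          /usr/local/src/nvme-cli/nvme reset "$1" || true # as at @32
      fi
  }
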
00:16:09.136   00:46:58	-- cuse/nvme_ns_manage_cuse.sh@75 -- # PCI_ALLOWED=0000:5e:00.0
00:16:09.136   00:46:58	-- cuse/nvme_ns_manage_cuse.sh@75 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:16:11.664  0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0
00:16:11.664  0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1
00:16:11.664  0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2
00:16:11.664  0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3
00:16:11.664  0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4
00:16:11.664  0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5
00:16:11.664  0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6
00:16:11.664  0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7
00:16:11.664  0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0
00:16:11.664  0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1
00:16:11.664  0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2
00:16:11.664  0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3
00:16:11.664  0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4
00:16:11.664  0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5
00:16:11.664  0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6
00:16:11.664  0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7
00:16:14.952  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:16:15.211   00:47:04	-- cuse/nvme_ns_manage_cuse.sh@78 -- # spdk_tgt_pid=1024274
00:16:15.211   00:47:04	-- cuse/nvme_ns_manage_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:16:15.211   00:47:04	-- cuse/nvme_ns_manage_cuse.sh@79 -- # trap 'kill -9 ${spdk_tgt_pid}; clean_up; exit 1' SIGINT SIGTERM EXIT
00:16:15.211   00:47:04	-- cuse/nvme_ns_manage_cuse.sh@81 -- # waitforlisten 1024274
00:16:15.211   00:47:04	-- common/autotest_common.sh@829 -- # '[' -z 1024274 ']'
00:16:15.211   00:47:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:15.211   00:47:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:15.211   00:47:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:15.211  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:15.211   00:47:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:15.211   00:47:04	-- common/autotest_common.sh@10 -- # set +x
00:16:15.211  [2024-12-17 00:47:04.276517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:15.211  [2024-12-17 00:47:04.276580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024274 ]
00:16:15.211  EAL: No free 2048 kB hugepages reported on node 1
00:16:15.211  [2024-12-17 00:47:04.371635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:15.211  [2024-12-17 00:47:04.420925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:15.211  [2024-12-17 00:47:04.421113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:15.211  [2024-12-17 00:47:04.421117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:15.470  [2024-12-17 00:47:04.578849] 'OCF_Core' volume operations registered
00:16:15.470  [2024-12-17 00:47:04.581160] 'OCF_Cache' volume operations registered
00:16:15.470  [2024-12-17 00:47:04.583930] 'OCF Composite' volume operations registered
00:16:15.470  [2024-12-17 00:47:04.586239] 'SPDK_block_device' volume operations registered
00:16:16.041   00:47:05	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:16.041   00:47:05	-- common/autotest_common.sh@862 -- # return 0
00:16:16.041   00:47:05	-- cuse/nvme_ns_manage_cuse.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:16:19.326  
00:16:19.326   00:47:08	-- cuse/nvme_ns_manage_cuse.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:16:19.326  [2024-12-17 00:47:08.584674] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:16:19.326  [2024-12-17 00:47:08.584837] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:16:19.584   00:47:08	-- cuse/nvme_ns_manage_cuse.sh@86 -- # ctrlr=/dev/spdk/nvme0
00:16:19.584   00:47:08	-- cuse/nvme_ns_manage_cuse.sh@88 -- # sleep 1
00:16:20.519   00:47:09	-- cuse/nvme_ns_manage_cuse.sh@89 -- # [[ -c /dev/spdk/nvme0 ]]
00:16:20.519   00:47:09	-- cuse/nvme_ns_manage_cuse.sh@94 -- # sleep 1
00:16:21.454   00:47:10	-- cuse/nvme_ns_manage_cuse.sh@96 -- # for nsid in "${nsids[@]}"
00:16:21.454   00:47:10	-- cuse/nvme_ns_manage_cuse.sh@97 -- # info_print 'create ns: nsze=10000 ncap=10000 flbias=0'
00:16:21.454   00:47:10	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:16:21.454  ---
00:16:21.454   00:47:10	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'create ns: nsze=10000 ncap=10000 flbias=0'
00:16:21.454  create ns: nsze=10000 ncap=10000 flbias=0
00:16:21.454   00:47:10	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:16:21.454  ---
00:16:21.454   00:47:10	-- cuse/nvme_ns_manage_cuse.sh@98 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/spdk/nvme0 -s 10000 -c 10000 -f 0
00:16:22.019  create-ns: Success, created nsid:1
00:16:22.019   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@99 -- # info_print 'attach ns: nsid=1 controller=0'
00:16:22.019   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:16:22.019  ---
00:16:22.019   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'attach ns: nsid=1 controller=0'
00:16:22.019  attach ns: nsid=1 controller=0
00:16:22.019   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:16:22.019  ---
00:16:22.019   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@100 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/spdk/nvme0 -n 1 -c 0
00:16:22.019  attach-ns: Success, nsid:1
00:16:22.020   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@101 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0
00:16:22.020   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]]
00:16:22.020   00:47:11	-- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1
00:16:22.952   00:47:12	-- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0
00:16:23.210  [2024-12-17 00:47:12.218879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:16:23.210  [2024-12-17 00:47:12.219834] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:16:23.210   00:47:12	-- cuse/nvme_ns_manage_cuse.sh@102 -- # sleep 1
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@103 -- # [[ -c /dev/spdk/nvme0n1 ]]
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@104 -- # info_print 'detach ns: nsid=1 controller=0'
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:16:24.144  ---
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'detach ns: nsid=1 controller=0'
00:16:24.144  detach ns: nsid=1 controller=0
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:16:24.144  ---
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@105 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/spdk/nvme0 -n 1 -c 0
00:16:24.144  detach-ns: Success, nsid:1
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@106 -- # info_print 'delete ns: nsid=1'
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:16:24.144  ---
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete ns: nsid=1'
00:16:24.144  delete ns: nsid=1
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:16:24.144  ---
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/spdk/nvme0 -n 1
00:16:24.144  delete-ns: Success, deleted nsid:1
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@108 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]]
00:16:24.144   00:47:13	-- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1
00:16:25.078   00:47:14	-- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0
00:16:25.078  [2024-12-17 00:47:14.294883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:16:25.645   00:47:14	-- cuse/nvme_ns_manage_cuse.sh@109 -- # sleep 1
00:16:26.578   00:47:15	-- cuse/nvme_ns_manage_cuse.sh@110 -- # [[ ! -c /dev/spdk/nvme0n1 ]]
00:16:26.578   00:47:15	-- cuse/nvme_ns_manage_cuse.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:16:30.766   00:47:19	-- cuse/nvme_ns_manage_cuse.sh@120 -- # sleep 1
00:16:31.700   00:47:20	-- cuse/nvme_ns_manage_cuse.sh@121 -- # [[ ! -c /dev/spdk/nvme0 ]]
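
Steps @98-@110 above walk the whole namespace lifecycle through the CUSE character device; stripped of the logging, the sequence is five nvme-cli calls (all flags verbatim from the trace):

  ctrlr=/dev/spdk/nvme0
  nvme=/usr/local/src/nvme-cli/nvme
  "$nvme" create-ns "$ctrlr" -s 10000 -c 10000 -f 0   # @98: nsze, ncap, LBA format
  "$nvme" attach-ns "$ctrlr" -n 1 -c 0                # @100: attach nsid 1 to cntlid 0
  "$nvme" reset     "$ctrlr"                          # @32: rescan; /dev/spdk/nvme0n1 appears
  "$nvme" detach-ns "$ctrlr" -n 1 -c 0                # @105
  "$nvme" delete-ns "$ctrlr" -n 1                     # @107: nsid 1 is gone again
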
00:16:31.700   00:47:20	-- cuse/nvme_ns_manage_cuse.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:16:31.700   00:47:20	-- cuse/nvme_ns_manage_cuse.sh@124 -- # killprocess 1024274
00:16:31.700   00:47:20	-- common/autotest_common.sh@936 -- # '[' -z 1024274 ']'
00:16:31.700   00:47:20	-- common/autotest_common.sh@940 -- # kill -0 1024274
00:16:31.700    00:47:20	-- common/autotest_common.sh@941 -- # uname
00:16:31.700   00:47:20	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:31.700    00:47:20	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1024274
00:16:31.960   00:47:20	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:31.960   00:47:20	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:31.960   00:47:20	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1024274'
00:16:31.960  killing process with pid 1024274
00:16:31.960   00:47:20	-- common/autotest_common.sh@955 -- # kill 1024274
00:16:31.960   00:47:20	-- common/autotest_common.sh@960 -- # wait 1024274
00:16:32.219   00:47:21	-- cuse/nvme_ns_manage_cuse.sh@125 -- # clean_up
00:16:32.219   00:47:21	-- cuse/nvme_ns_manage_cuse.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:16:35.504  Waiting for block devices as requested
00:16:35.504  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:16:35.505  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:16:35.505  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:16:40.927  * Events for some block/disk devices (0000:5e:00.0) were not caught, they may be missing
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@48 -- # remove_all_namespaces
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces'
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:16:40.927  ---
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces'
00:16:40.927  delete all namespaces
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:16:40.927  ---
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}"
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@40 -- # info_print 'removing nsid=1'
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:16:40.927  ---
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1'
00:16:40.927  removing nsid=1
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:16:40.927  ---
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0
00:16:40.927  NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002)
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@41 -- # true
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1
00:16:40.927  NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002)
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@42 -- # true
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@50 -- # echo 'Restoring /dev/nvme0...'
00:16:40.927  Restoring /dev/nvme0...
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@51 -- # for nsid in "${nsids[@]}"
00:16:40.927    00:47:29	-- cuse/nvme_ns_manage_cuse.sh@52 -- # get_nvme_ns_feature nvme0 1 ncap
00:16:40.927    00:47:29	-- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=ncap
00:16:40.927    00:47:29	-- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns
00:16:40.927    00:47:29	-- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@87 -- # local -n _ns=nvme0n1
00:16:40.927    00:47:29	-- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@90 -- # echo 0x1d1c0beb0
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@52 -- # ncap=0x1d1c0beb0
00:16:40.927    00:47:29	-- cuse/nvme_ns_manage_cuse.sh@53 -- # get_nvme_ns_feature nvme0 1 nsze
00:16:40.927    00:47:29	-- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=nsze
00:16:40.927    00:47:29	-- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns
00:16:40.927    00:47:29	-- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@87 -- # local -n _ns=nvme0n1
00:16:40.927    00:47:29	-- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@90 -- # echo 0x1d1c0beb0
00:16:40.927   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@53 -- # nsze=0x1d1c0beb0
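
get_nvme_ns_feature (@80-@90 above) chains two bash namerefs, controller name to its namespace map to the per-namespace array, and echoes one identify-ns field. A minimal sketch matching the trace:

  get_nvme_ns_feature() {
      local ctrl=$1 ns=$2 reg=$3
      local -n _nss=${ctrl}_ns                       # e.g. nvme0_ns, as at @84
      local -n _ns=${_nss[$ns]}                      # e.g. nvme0n1, as at @87
      [[ -n ${_ns[$reg]} ]] && echo "${_ns[$reg]}"   # as at @89-@90
  }
  # From the trace: get_nvme_ns_feature nvme0 1 ncap  ->  0x1d1c0beb0
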
00:16:40.927    00:47:29	-- cuse/nvme_ns_manage_cuse.sh@54 -- # get_active_lbaf nvme0 1
00:16:40.927    00:47:29	-- nvme/functions.sh@103 -- # local ctrl=nvme0 ns=1 reg lbaf
00:16:40.927    00:47:29	-- nvme/functions.sh@105 -- # [[ -n nvme0_ns ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@107 -- # local -n _nss=nvme0_ns
00:16:40.927    00:47:29	-- nvme/functions.sh@108 -- # [[ -n nvme0n1 ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@110 -- # local -n _ns=nvme0n1
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ fpi == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nawupf == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nsfeat == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ endgid == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nawun == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nabspf == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nabo == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nabsn == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nulbaf == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ ncap == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ dpc == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ dps == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nguid == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ noiob == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nacwu == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ mssrl == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ dlfeat == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ nlbaf == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # [[ mc == lbaf* ]]
00:16:40.927    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.927    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.928    00:47:29	-- nvme/functions.sh@113 -- # [[ nmic == lbaf* ]]
00:16:40.928    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.928    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.928    00:47:29	-- nvme/functions.sh@113 -- # [[ nvmsetid == lbaf* ]]
00:16:40.928    00:47:29	-- nvme/functions.sh@113 -- # continue
00:16:40.928    00:47:29	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:16:40.928    00:47:29	-- nvme/functions.sh@113 -- # [[ lbaf0 == lbaf* ]]
00:16:40.928    00:47:29	-- nvme/functions.sh@114 -- # [[ ms:0   lbads:9  rp:0x2 (in use) == *\i\n\ \u\s\e* ]]
00:16:40.928    00:47:29	-- nvme/functions.sh@115 -- # echo 0
00:16:40.928    00:47:29	-- nvme/functions.sh@115 -- # return 0
00:16:40.928   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@54 -- # lbaf=0
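
The long @112-@115 scan above finds which LBA format is active: it iterates every key of the namespace array, skips anything not named lbaf*, and returns the index whose descriptor contains "(in use)". An equivalent sketch:

  get_active_lbaf() {
      local -n _ns=$1
      local reg
      for reg in "${!_ns[@]}"; do                          # as at @112
          [[ $reg == lbaf* ]] || continue                  # as at @113
          [[ ${_ns[$reg]} == *"in use"* ]] || continue     # as at @114
          echo "${reg#lbaf}"                               # as at @115: lbaf0 -> 0
          return 0
      done
      return 1
  }
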
00:16:40.928   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/nvme0 -s 0x1d1c0beb0 -c 0x1d1c0beb0 -f 0
00:16:40.928  create-ns: Success, created nsid:1
00:16:40.928   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@56 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/nvme0 -n 1 -c 0
00:16:40.928  attach-ns: Success, nsid:1
00:16:40.928   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@57 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0
00:16:40.928   00:47:29	-- cuse/nvme_ns_manage_cuse.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:16:43.458  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:16:43.458  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:16:43.717  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:16:43.717  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:16:43.717  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:16:43.717  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:16:43.717  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:16:43.717  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:16:43.717  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:16:47.007  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:16:47.007  
00:16:47.007  real	1m3.064s
00:16:47.007  user	0m37.777s
00:16:47.007  sys	0m9.694s
00:16:47.007   00:47:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:47.007   00:47:36	-- common/autotest_common.sh@10 -- # set +x
00:16:47.007  ************************************
00:16:47.007  END TEST nvme_ns_manage_cuse
00:16:47.007  ************************************
00:16:47.007   00:47:36	-- cuse/nvme_cuse.sh@23 -- # rmmod cuse
00:16:47.007   00:47:36	-- cuse/nvme_cuse.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:16:50.295  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:16:50.295  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:16:50.295  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:16:50.295  
00:16:50.295  real	2m56.556s
00:16:50.295  user	2m26.292s
00:16:50.295  sys	0m34.744s
00:16:50.295   00:47:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:50.295   00:47:39	-- common/autotest_common.sh@10 -- # set +x
00:16:50.295  ************************************
00:16:50.295  END TEST nvme_cuse
00:16:50.295  ************************************
00:16:50.295   00:47:39	-- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]]
00:16:50.295   00:47:39	-- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]]
00:16:50.295   00:47:39	-- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:16:50.295   00:47:39	-- spdk/autotest.sh@233 -- # run_test nvme_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh
00:16:50.295   00:47:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:16:50.295   00:47:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:50.295   00:47:39	-- common/autotest_common.sh@10 -- # set +x
00:16:50.295  ************************************
00:16:50.295  START TEST nvme_rpc
00:16:50.295  ************************************
00:16:50.295   00:47:39	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh
00:16:50.295  * Looking for test storage...
00:16:50.295  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:16:50.295    00:47:39	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:50.295     00:47:39	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:50.295     00:47:39	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:50.295    00:47:39	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:50.295    00:47:39	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:50.295    00:47:39	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:50.295    00:47:39	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:50.295    00:47:39	-- scripts/common.sh@335 -- # IFS=.-:
00:16:50.295    00:47:39	-- scripts/common.sh@335 -- # read -ra ver1
00:16:50.295    00:47:39	-- scripts/common.sh@336 -- # IFS=.-:
00:16:50.295    00:47:39	-- scripts/common.sh@336 -- # read -ra ver2
00:16:50.295    00:47:39	-- scripts/common.sh@337 -- # local 'op=<'
00:16:50.295    00:47:39	-- scripts/common.sh@339 -- # ver1_l=2
00:16:50.295    00:47:39	-- scripts/common.sh@340 -- # ver2_l=1
00:16:50.295    00:47:39	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:50.295    00:47:39	-- scripts/common.sh@343 -- # case "$op" in
00:16:50.295    00:47:39	-- scripts/common.sh@344 -- # : 1
00:16:50.295    00:47:39	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:50.295    00:47:39	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:50.295     00:47:39	-- scripts/common.sh@364 -- # decimal 1
00:16:50.295     00:47:39	-- scripts/common.sh@352 -- # local d=1
00:16:50.296     00:47:39	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:50.296     00:47:39	-- scripts/common.sh@354 -- # echo 1
00:16:50.296    00:47:39	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:50.296     00:47:39	-- scripts/common.sh@365 -- # decimal 2
00:16:50.296     00:47:39	-- scripts/common.sh@352 -- # local d=2
00:16:50.296     00:47:39	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:50.296     00:47:39	-- scripts/common.sh@354 -- # echo 2
00:16:50.296    00:47:39	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:50.296    00:47:39	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:50.296    00:47:39	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:50.296    00:47:39	-- scripts/common.sh@367 -- # return 0
00:16:50.296    00:47:39	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:50.296    00:47:39	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:50.296  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.296  		--rc genhtml_branch_coverage=1
00:16:50.296  		--rc genhtml_function_coverage=1
00:16:50.296  		--rc genhtml_legend=1
00:16:50.296  		--rc geninfo_all_blocks=1
00:16:50.296  		--rc geninfo_unexecuted_blocks=1
00:16:50.296  		
00:16:50.296  		'
00:16:50.296    00:47:39	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:50.296  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.296  		--rc genhtml_branch_coverage=1
00:16:50.296  		--rc genhtml_function_coverage=1
00:16:50.296  		--rc genhtml_legend=1
00:16:50.296  		--rc geninfo_all_blocks=1
00:16:50.296  		--rc geninfo_unexecuted_blocks=1
00:16:50.296  		
00:16:50.296  		'
00:16:50.296    00:47:39	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:50.296  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.296  		--rc genhtml_branch_coverage=1
00:16:50.296  		--rc genhtml_function_coverage=1
00:16:50.296  		--rc genhtml_legend=1
00:16:50.296  		--rc geninfo_all_blocks=1
00:16:50.296  		--rc geninfo_unexecuted_blocks=1
00:16:50.296  		
00:16:50.296  		'
00:16:50.296    00:47:39	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:50.296  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.296  		--rc genhtml_branch_coverage=1
00:16:50.296  		--rc genhtml_function_coverage=1
00:16:50.296  		--rc genhtml_legend=1
00:16:50.296  		--rc geninfo_all_blocks=1
00:16:50.296  		--rc geninfo_unexecuted_blocks=1
00:16:50.296  		
00:16:50.296  		'
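The block above is scripts/common.sh deciding whether the installed lcov predates 2.0 so the matching coverage flags get exported. Stripped of xtrace noise, the comparison works by splitting both versions on '.', '-' and ':' and comparing numerically field by field (a simplified restatement, not the script verbatim):

    # lt A B: return 0 (true) when version A sorts strictly before version B.
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is pre-2.0: use the 1.x LCOV_OPTS spelling"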
00:16:50.296   00:47:39	-- nvme/nvme_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:16:50.296    00:47:39	-- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:16:50.296    00:47:39	-- common/autotest_common.sh@1519 -- # bdfs=()
00:16:50.296    00:47:39	-- common/autotest_common.sh@1519 -- # local bdfs
00:16:50.296    00:47:39	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:16:50.296     00:47:39	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:16:50.296     00:47:39	-- common/autotest_common.sh@1508 -- # bdfs=()
00:16:50.296     00:47:39	-- common/autotest_common.sh@1508 -- # local bdfs
00:16:50.296     00:47:39	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:16:50.296      00:47:39	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:16:50.296      00:47:39	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:16:50.554     00:47:39	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:16:50.554     00:47:39	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:16:50.554    00:47:39	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:16:50.554   00:47:39	-- nvme/nvme_rpc.sh@13 -- # bdf=0000:5e:00.0
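get_first_nvme_bdf, traced above, just asks gen_nvme.sh for the JSON config and picks the first transport address. The same query outside the harness (jq assumed installed; $rootdir is the SPDK checkout):

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} )) || { echo "no NVMe devices found" >&2; exit 1; }
    echo "${bdfs[0]}"   # 0000:5e:00.0 on this machine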
00:16:50.554   00:47:39	-- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=1030931
00:16:50.554   00:47:39	-- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:16:50.554   00:47:39	-- nvme/nvme_rpc.sh@19 -- # waitforlisten 1030931
00:16:50.554   00:47:39	-- common/autotest_common.sh@829 -- # '[' -z 1030931 ']'
00:16:50.554   00:47:39	-- nvme/nvme_rpc.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:16:50.554   00:47:39	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:50.554   00:47:39	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:50.554   00:47:39	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:50.554  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:50.554   00:47:39	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:50.554   00:47:39	-- common/autotest_common.sh@10 -- # set +x
00:16:50.554  [2024-12-17 00:47:39.688668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:50.554  [2024-12-17 00:47:39.688744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030931 ]
00:16:50.554  EAL: No free 2048 kB hugepages reported on node 1
00:16:50.554  [2024-12-17 00:47:39.797207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:50.812  [2024-12-17 00:47:39.845624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:50.812  [2024-12-17 00:47:39.845807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:50.812  [2024-12-17 00:47:39.845812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:50.812  [2024-12-17 00:47:40.001334] 'OCF_Core' volume operations registered
00:16:50.812  [2024-12-17 00:47:40.003986] 'OCF_Cache' volume operations registered
00:16:50.812  [2024-12-17 00:47:40.006869] 'OCF Composite' volume operations registered
00:16:50.812  [2024-12-17 00:47:40.009063] 'SPDK_block_device' volume operations registered
00:16:51.379   00:47:40	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:51.379   00:47:40	-- common/autotest_common.sh@862 -- # return 0
00:16:51.380   00:47:40	-- nvme/nvme_rpc.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:16:54.667  Nvme0n1
00:16:54.667   00:47:43	-- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:16:54.667   00:47:43	-- nvme/nvme_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:16:54.925  request:
00:16:54.925  {
00:16:54.925    "filename": "non_existing_file",
00:16:54.925    "bdev_name": "Nvme0n1",
00:16:54.925    "method": "bdev_nvme_apply_firmware",
00:16:54.925    "req_id": 1
00:16:54.925  }
00:16:54.925  Got JSON-RPC error response
00:16:54.925  response:
00:16:54.925  {
00:16:54.925    "code": -32603,
00:16:54.925    "message": "open file failed."
00:16:54.925  }
00:16:54.925   00:47:43	-- nvme/nvme_rpc.sh@32 -- # rv=1
00:16:54.925   00:47:43	-- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
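This is the negative path of the test: applying firmware from a file that does not exist must fail, and the assertion at nvme_rpc.sh@32-33 only checks that the RPC returned nonzero. A condensed sketch ($rpc_py is the rpc.py path set at @11 above):

    rv=0
    $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=$?
    [ "$rv" -ne 0 ] || { echo "expected open-file failure (-32603), got success" >&2; exit 1; }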
00:16:54.925   00:47:43	-- nvme/nvme_rpc.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:16:59.141   00:47:47	-- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:16:59.141   00:47:47	-- nvme/nvme_rpc.sh@40 -- # killprocess 1030931
00:16:59.141   00:47:47	-- common/autotest_common.sh@936 -- # '[' -z 1030931 ']'
00:16:59.141   00:47:47	-- common/autotest_common.sh@940 -- # kill -0 1030931
00:16:59.141    00:47:47	-- common/autotest_common.sh@941 -- # uname
00:16:59.141   00:47:47	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:59.141    00:47:47	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1030931
00:16:59.141   00:47:47	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:59.141   00:47:47	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:59.141   00:47:47	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1030931'
00:16:59.141  killing process with pid 1030931
00:16:59.141   00:47:47	-- common/autotest_common.sh@955 -- # kill 1030931
00:16:59.141   00:47:47	-- common/autotest_common.sh@960 -- # wait 1030931
00:16:59.141  
00:16:59.141  real	0m9.064s
00:16:59.141  user	0m17.199s
00:16:59.141  sys	0m0.907s
00:16:59.141   00:47:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:59.141   00:47:48	-- common/autotest_common.sh@10 -- # set +x
00:16:59.141  ************************************
00:16:59.141  END TEST nvme_rpc
00:16:59.141  ************************************
00:16:59.400   00:47:48	-- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh
00:16:59.400   00:47:48	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:16:59.400   00:47:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:59.400   00:47:48	-- common/autotest_common.sh@10 -- # set +x
00:16:59.400  ************************************
00:16:59.400  START TEST nvme_rpc_timeouts
00:16:59.400  ************************************
00:16:59.400   00:47:48	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh
00:16:59.400  * Looking for test storage...
00:16:59.400  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:16:59.400    00:47:48	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:59.400     00:47:48	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:59.400     00:47:48	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:59.400    00:47:48	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:59.400    00:47:48	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:59.400    00:47:48	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:59.400    00:47:48	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:59.400    00:47:48	-- scripts/common.sh@335 -- # IFS=.-:
00:16:59.400    00:47:48	-- scripts/common.sh@335 -- # read -ra ver1
00:16:59.400    00:47:48	-- scripts/common.sh@336 -- # IFS=.-:
00:16:59.400    00:47:48	-- scripts/common.sh@336 -- # read -ra ver2
00:16:59.400    00:47:48	-- scripts/common.sh@337 -- # local 'op=<'
00:16:59.400    00:47:48	-- scripts/common.sh@339 -- # ver1_l=2
00:16:59.400    00:47:48	-- scripts/common.sh@340 -- # ver2_l=1
00:16:59.400    00:47:48	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:59.400    00:47:48	-- scripts/common.sh@343 -- # case "$op" in
00:16:59.400    00:47:48	-- scripts/common.sh@344 -- # : 1
00:16:59.400    00:47:48	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:59.400    00:47:48	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:59.400     00:47:48	-- scripts/common.sh@364 -- # decimal 1
00:16:59.400     00:47:48	-- scripts/common.sh@352 -- # local d=1
00:16:59.400     00:47:48	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:59.400     00:47:48	-- scripts/common.sh@354 -- # echo 1
00:16:59.400    00:47:48	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:59.400     00:47:48	-- scripts/common.sh@365 -- # decimal 2
00:16:59.400     00:47:48	-- scripts/common.sh@352 -- # local d=2
00:16:59.400     00:47:48	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:59.400     00:47:48	-- scripts/common.sh@354 -- # echo 2
00:16:59.400    00:47:48	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:59.400    00:47:48	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:59.400    00:47:48	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:59.400    00:47:48	-- scripts/common.sh@367 -- # return 0
00:16:59.400    00:47:48	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:59.400    00:47:48	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:59.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:59.400  		--rc genhtml_branch_coverage=1
00:16:59.400  		--rc genhtml_function_coverage=1
00:16:59.400  		--rc genhtml_legend=1
00:16:59.400  		--rc geninfo_all_blocks=1
00:16:59.400  		--rc geninfo_unexecuted_blocks=1
00:16:59.400  		
00:16:59.400  		'
00:16:59.400    00:47:48	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:59.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:59.400  		--rc genhtml_branch_coverage=1
00:16:59.400  		--rc genhtml_function_coverage=1
00:16:59.400  		--rc genhtml_legend=1
00:16:59.400  		--rc geninfo_all_blocks=1
00:16:59.400  		--rc geninfo_unexecuted_blocks=1
00:16:59.400  		
00:16:59.400  		'
00:16:59.400    00:47:48	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:59.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:59.400  		--rc genhtml_branch_coverage=1
00:16:59.400  		--rc genhtml_function_coverage=1
00:16:59.400  		--rc genhtml_legend=1
00:16:59.400  		--rc geninfo_all_blocks=1
00:16:59.400  		--rc geninfo_unexecuted_blocks=1
00:16:59.400  		
00:16:59.400  		'
00:16:59.400    00:47:48	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:59.400  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:59.400  		--rc genhtml_branch_coverage=1
00:16:59.400  		--rc genhtml_function_coverage=1
00:16:59.400  		--rc genhtml_legend=1
00:16:59.401  		--rc geninfo_all_blocks=1
00:16:59.401  		--rc geninfo_unexecuted_blocks=1
00:16:59.401  		
00:16:59.401  		'
00:16:59.401   00:47:48	-- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:16:59.401   00:47:48	-- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_1032127
00:16:59.401   00:47:48	-- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_1032127
00:16:59.401   00:47:48	-- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=1032233
00:16:59.401   00:47:48	-- nvme/nvme_rpc_timeouts.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:16:59.401   00:47:48	-- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:16:59.401   00:47:48	-- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 1032233
00:16:59.401   00:47:48	-- common/autotest_common.sh@829 -- # '[' -z 1032233 ']'
00:16:59.401   00:47:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:59.401   00:47:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:59.401   00:47:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:59.401  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:59.401   00:47:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:59.401   00:47:48	-- common/autotest_common.sh@10 -- # set +x
00:16:59.659  [2024-12-17 00:47:48.706427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:59.659  [2024-12-17 00:47:48.706501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032233 ]
00:16:59.659  EAL: No free 2048 kB hugepages reported on node 1
00:16:59.659  [2024-12-17 00:47:48.812402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:59.659  [2024-12-17 00:47:48.862617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:59.659  [2024-12-17 00:47:48.862809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:59.659  [2024-12-17 00:47:48.862814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:59.917  [2024-12-17 00:47:49.040089] 'OCF_Core' volume operations registered
00:16:59.917  [2024-12-17 00:47:49.042512] 'OCF_Cache' volume operations registered
00:16:59.917  [2024-12-17 00:47:49.045421] 'OCF Composite' volume operations registered
00:16:59.917  [2024-12-17 00:47:49.047856] 'SPDK_block_device' volume operations registered
00:17:00.483   00:47:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:00.483   00:47:49	-- common/autotest_common.sh@862 -- # return 0
00:17:00.483   00:47:49	-- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:17:00.483  Checking default timeout settings:
00:17:00.483   00:47:49	-- nvme/nvme_rpc_timeouts.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config
00:17:01.067   00:47:50	-- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:17:01.067  Making settings changes with rpc:
00:17:01.067   00:47:50	-- nvme/nvme_rpc_timeouts.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:17:01.067   00:47:50	-- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:17:01.067  Check default vs. modified settings:
00:17:01.067   00:47:50	-- nvme/nvme_rpc_timeouts.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_1032127
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_1032127
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:17:01.632  Setting action_on_timeout is changed as expected.
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_1032127
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_1032127
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:17:01.632  Setting timeout_us is changed as expected.
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_1032127
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_1032127
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:17:01.632    00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
00:17:01.632  Setting timeout_admin_us is changed as expected.
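Put together, the timeouts check is: dump the config, change the three settings over RPC, dump again, and require that each value differs. A condensed sketch of that loop (file names as used in this run; $rpc_py as above):

    $rpc_py save_config > /tmp/settings_default_1032127
    $rpc_py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc_py save_config > /tmp/settings_modified_1032127
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_1032127 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_1032127 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] || { echo "Setting $setting was not changed" >&2; exit 1; }
    done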
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_1032127 /tmp/settings_modified_1032127
00:17:01.632   00:47:50	-- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 1032233
00:17:01.632   00:47:50	-- common/autotest_common.sh@936 -- # '[' -z 1032233 ']'
00:17:01.632   00:47:50	-- common/autotest_common.sh@940 -- # kill -0 1032233
00:17:01.632    00:47:50	-- common/autotest_common.sh@941 -- # uname
00:17:01.632   00:47:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:01.632    00:47:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1032233
00:17:01.632   00:47:50	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:01.632   00:47:50	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:01.632   00:47:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1032233'
00:17:01.632  killing process with pid 1032233
00:17:01.632   00:47:50	-- common/autotest_common.sh@955 -- # kill 1032233
00:17:01.632   00:47:50	-- common/autotest_common.sh@960 -- # wait 1032233
00:17:02.198   00:47:51	-- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:17:02.198  RPC TIMEOUT SETTING TEST PASSED.
00:17:02.198  
00:17:02.198  real	0m2.888s
00:17:02.198  user	0m5.839s
00:17:02.198  sys	0m0.878s
00:17:02.198   00:47:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:17:02.198   00:47:51	-- common/autotest_common.sh@10 -- # set +x
00:17:02.198  ************************************
00:17:02.198  END TEST nvme_rpc_timeouts
00:17:02.198  ************************************
00:17:02.198   00:47:51	-- spdk/autotest.sh@238 -- # '[' 0 -eq 0 ']'
00:17:02.198    00:47:51	-- spdk/autotest.sh@238 -- # uname -s
00:17:02.198   00:47:51	-- spdk/autotest.sh@238 -- # '[' Linux = Linux ']'
00:17:02.198   00:47:51	-- spdk/autotest.sh@239 -- # run_test sw_hotplug /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh
00:17:02.198   00:47:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:17:02.198   00:47:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:02.198   00:47:51	-- common/autotest_common.sh@10 -- # set +x
00:17:02.198  ************************************
00:17:02.198  START TEST sw_hotplug
00:17:02.198  ************************************
00:17:02.198   00:47:51	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh
00:17:02.454  * Looking for test storage...
00:17:02.454  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:17:02.454    00:47:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:17:02.454     00:47:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:17:02.454     00:47:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:17:02.454    00:47:51	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:17:02.454    00:47:51	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:17:02.454    00:47:51	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:17:02.454    00:47:51	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:17:02.454    00:47:51	-- scripts/common.sh@335 -- # IFS=.-:
00:17:02.454    00:47:51	-- scripts/common.sh@335 -- # read -ra ver1
00:17:02.454    00:47:51	-- scripts/common.sh@336 -- # IFS=.-:
00:17:02.454    00:47:51	-- scripts/common.sh@336 -- # read -ra ver2
00:17:02.454    00:47:51	-- scripts/common.sh@337 -- # local 'op=<'
00:17:02.454    00:47:51	-- scripts/common.sh@339 -- # ver1_l=2
00:17:02.454    00:47:51	-- scripts/common.sh@340 -- # ver2_l=1
00:17:02.454    00:47:51	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:17:02.454    00:47:51	-- scripts/common.sh@343 -- # case "$op" in
00:17:02.454    00:47:51	-- scripts/common.sh@344 -- # : 1
00:17:02.454    00:47:51	-- scripts/common.sh@363 -- # (( v = 0 ))
00:17:02.454    00:47:51	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:02.454     00:47:51	-- scripts/common.sh@364 -- # decimal 1
00:17:02.454     00:47:51	-- scripts/common.sh@352 -- # local d=1
00:17:02.454     00:47:51	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:02.454     00:47:51	-- scripts/common.sh@354 -- # echo 1
00:17:02.454    00:47:51	-- scripts/common.sh@364 -- # ver1[v]=1
00:17:02.454     00:47:51	-- scripts/common.sh@365 -- # decimal 2
00:17:02.454     00:47:51	-- scripts/common.sh@352 -- # local d=2
00:17:02.454     00:47:51	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:02.454     00:47:51	-- scripts/common.sh@354 -- # echo 2
00:17:02.454    00:47:51	-- scripts/common.sh@365 -- # ver2[v]=2
00:17:02.454    00:47:51	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:17:02.454    00:47:51	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:17:02.454    00:47:51	-- scripts/common.sh@367 -- # return 0
00:17:02.454    00:47:51	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:02.454    00:47:51	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:17:02.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.454  		--rc genhtml_branch_coverage=1
00:17:02.454  		--rc genhtml_function_coverage=1
00:17:02.454  		--rc genhtml_legend=1
00:17:02.454  		--rc geninfo_all_blocks=1
00:17:02.454  		--rc geninfo_unexecuted_blocks=1
00:17:02.454  		
00:17:02.454  		'
00:17:02.454    00:47:51	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:17:02.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.454  		--rc genhtml_branch_coverage=1
00:17:02.454  		--rc genhtml_function_coverage=1
00:17:02.454  		--rc genhtml_legend=1
00:17:02.454  		--rc geninfo_all_blocks=1
00:17:02.454  		--rc geninfo_unexecuted_blocks=1
00:17:02.454  		
00:17:02.454  		'
00:17:02.454    00:47:51	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:17:02.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.454  		--rc genhtml_branch_coverage=1
00:17:02.454  		--rc genhtml_function_coverage=1
00:17:02.454  		--rc genhtml_legend=1
00:17:02.454  		--rc geninfo_all_blocks=1
00:17:02.454  		--rc geninfo_unexecuted_blocks=1
00:17:02.454  		
00:17:02.454  		'
00:17:02.454    00:47:51	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:17:02.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:02.454  		--rc genhtml_branch_coverage=1
00:17:02.454  		--rc genhtml_function_coverage=1
00:17:02.454  		--rc genhtml_legend=1
00:17:02.454  		--rc geninfo_all_blocks=1
00:17:02.454  		--rc geninfo_unexecuted_blocks=1
00:17:02.454  		
00:17:02.454  		'
00:17:02.454   00:47:51	-- nvme/sw_hotplug.sh@122 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:17:05.738  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:17:05.738  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:17:05.738  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:17:05.738   00:47:54	-- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6
00:17:05.738   00:47:54	-- nvme/sw_hotplug.sh@125 -- # hotplug_events=3
00:17:05.738   00:47:54	-- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace))
00:17:05.738    00:47:54	-- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace
00:17:05.738    00:47:54	-- scripts/common.sh@311 -- # local bdf bdfs
00:17:05.738    00:47:54	-- scripts/common.sh@312 -- # local nvmes
00:17:05.738    00:47:54	-- scripts/common.sh@314 -- # [[ -n '' ]]
00:17:05.738    00:47:54	-- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:17:05.738     00:47:54	-- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02
00:17:05.738     00:47:54	-- scripts/common.sh@297 -- # local bdf=
00:17:05.738      00:47:54	-- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02
00:17:05.738      00:47:54	-- scripts/common.sh@232 -- # local class
00:17:05.738      00:47:54	-- scripts/common.sh@233 -- # local subclass
00:17:05.738      00:47:54	-- scripts/common.sh@234 -- # local progif
00:17:05.738       00:47:54	-- scripts/common.sh@235 -- # printf %02x 1
00:17:05.738      00:47:54	-- scripts/common.sh@235 -- # class=01
00:17:05.739       00:47:54	-- scripts/common.sh@236 -- # printf %02x 8
00:17:05.739      00:47:54	-- scripts/common.sh@236 -- # subclass=08
00:17:05.739       00:47:54	-- scripts/common.sh@237 -- # printf %02x 2
00:17:05.739      00:47:54	-- scripts/common.sh@237 -- # progif=02
00:17:05.739      00:47:54	-- scripts/common.sh@239 -- # hash lspci
00:17:05.739      00:47:54	-- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']'
00:17:05.739      00:47:54	-- scripts/common.sh@241 -- # lspci -mm -n -D
00:17:05.739      00:47:54	-- scripts/common.sh@242 -- # grep -i -- -p02
00:17:05.739      00:47:54	-- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:17:05.739      00:47:54	-- scripts/common.sh@244 -- # tr -d '"'
00:17:05.739     00:47:54	-- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:17:05.739     00:47:54	-- scripts/common.sh@300 -- # pci_can_use 0000:5e:00.0
00:17:05.739     00:47:54	-- scripts/common.sh@15 -- # local i
00:17:05.739     00:47:54	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:17:05.739     00:47:54	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:17:05.739     00:47:54	-- scripts/common.sh@24 -- # return 0
00:17:05.739     00:47:54	-- scripts/common.sh@301 -- # echo 0000:5e:00.0
00:17:05.739    00:47:54	-- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:17:05.739    00:47:54	-- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]]
00:17:05.739     00:47:54	-- scripts/common.sh@322 -- # uname -s
00:17:05.739    00:47:54	-- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:17:05.739    00:47:54	-- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:17:05.739    00:47:54	-- scripts/common.sh@327 -- # (( 1 ))
00:17:05.739    00:47:54	-- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0
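The nvme_in_userspace walk above reduces to a single pipeline: take lspci's machine-readable listing and keep devices whose class code is 0108 (mass storage / NVM subclass / NVMe interface). Reassembled from the trace at scripts/common.sh@241-244:

    # Print the BDF of every NVMe-class PCI function.
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'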
00:17:05.739   00:47:54	-- nvme/sw_hotplug.sh@127 -- # nvme_count=1
00:17:05.739   00:47:54	-- nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}")
00:17:05.739   00:47:54	-- nvme/sw_hotplug.sh@130 -- # xtrace_disable
00:17:05.739   00:47:54	-- common/autotest_common.sh@10 -- # set +x
00:17:09.024   00:47:57	-- nvme/sw_hotplug.sh@135 -- # run_hotplug
00:17:09.024   00:47:57	-- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:17:09.024   00:47:57	-- nvme/sw_hotplug.sh@73 -- # hotplug_pid=1035183
00:17:09.024   00:47:57	-- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false
00:17:09.024   00:47:57	-- nvme/sw_hotplug.sh@14 -- # local helper_time=0
00:17:09.024   00:47:57	-- nvme/sw_hotplug.sh@68 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning
00:17:09.024    00:47:57	-- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false
00:17:09.024    00:47:57	-- common/autotest_common.sh@708 -- # [[ -t 0 ]]
00:17:09.024    00:47:57	-- common/autotest_common.sh@708 -- # exec
00:17:09.024    00:47:57	-- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R
00:17:09.024     00:47:57	-- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 false
00:17:09.024     00:47:57	-- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3
00:17:09.024     00:47:57	-- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6
00:17:09.024     00:47:57	-- nvme/sw_hotplug.sh@24 -- # local use_bdev=false
00:17:09.024     00:47:57	-- nvme/sw_hotplug.sh@25 -- # local dev bdfs
00:17:09.024     00:47:57	-- nvme/sw_hotplug.sh@31 -- # sleep 6
00:17:09.024  EAL: No free 2048 kB hugepages reported on node 1
00:17:09.024  Initializing NVMe Controllers
00:17:09.958  Attaching to 0000:5e:00.0
00:17:11.862  Attached to 0000:5e:00.0
00:17:11.862  Initialization complete. Starting I/O...
00:17:11.862  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        128 I/Os completed (+128)
00:17:11.862  
00:17:12.797  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       3456 I/Os completed (+3328)
00:17:12.797  
00:17:14.173  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       6912 I/Os completed (+3456)
00:17:14.173  
00:17:14.741     00:48:03	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:14.741     00:48:03	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:14.741     00:48:03	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:14.741  [2024-12-17 00:48:03.943775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:14.741  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:14.741  [2024-12-17 00:48:03.943836] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741  [2024-12-17 00:48:03.943860] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741  [2024-12-17 00:48:03.943874] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741  [2024-12-17 00:48:03.943893] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:14.741  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:14.741  [2024-12-17 00:48:03.945126] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741  [2024-12-17 00:48:03.945153] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741  [2024-12-17 00:48:03.945168] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741  [2024-12-17 00:48:03.945182] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:14.741     00:48:03	-- nvme/sw_hotplug.sh@38 -- # false
00:17:14.741     00:48:03	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:17:14.741  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:5e:00.0/vendor
00:17:14.741  EAL: Scan for (pci) bus failed.
00:17:15.001  
00:17:15.001     00:48:04	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:17:15.001     00:48:04	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:17:15.001     00:48:04	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:17:15.937  
00:17:16.872  
00:17:17.809  
00:17:18.378     00:48:07	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:17:18.378     00:48:07	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:17:18.378     00:48:07	-- nvme/sw_hotplug.sh@54 -- # sleep 6
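Each hotplug cycle above is: surprise-remove the device while I/O is running, watch the driver abort the outstanding commands, then bring the device back and rebind it. Outside the harness the same pattern can be driven through the standard PCI sysfs interface (a sketch; the rescan path matches the cleanup trap at sw_hotplug.sh@100 later in this log, and $bdf would be 0000:5e:00.0 on this box):

    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the function
    sleep 1
    echo 1 > /sys/bus/pci/rescan                  # rediscover it via bus rescan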
00:17:18.946  Attaching to 0000:5e:00.0
00:17:21.480  Attached to 0000:5e:00.0
00:17:21.480  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):          0 I/Os completed (+0)
00:17:21.480  
00:17:21.480  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        128 I/Os completed (+128)
00:17:21.480  
00:17:21.480  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        256 I/Os completed (+128)
00:17:21.480  
00:17:22.047  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       2944 I/Os completed (+2688)
00:17:22.047  
00:17:22.984  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       6400 I/Os completed (+3456)
00:17:22.984  
00:17:23.920  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       9856 I/Os completed (+3456)
00:17:23.920  
00:17:24.179     00:48:13	-- nvme/sw_hotplug.sh@56 -- # false
00:17:24.179     00:48:13	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:24.179     00:48:13	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:24.179     00:48:13	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:24.179  [2024-12-17 00:48:13.418567] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:24.179  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:24.179  [2024-12-17 00:48:13.418602] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179  [2024-12-17 00:48:13.418624] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179  [2024-12-17 00:48:13.418638] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179  [2024-12-17 00:48:13.418651] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:24.179  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:24.179  [2024-12-17 00:48:13.419743] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179  [2024-12-17 00:48:13.419767] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179  [2024-12-17 00:48:13.419782] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179  [2024-12-17 00:48:13.419796] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:24.179     00:48:13	-- nvme/sw_hotplug.sh@38 -- # false
00:17:24.179     00:48:13	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:17:24.438     00:48:13	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:17:24.438     00:48:13	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:17:24.438     00:48:13	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:17:25.005  
00:17:25.944  
00:17:26.882  
00:17:27.818     00:48:16	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:17:27.818     00:48:16	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:17:27.818     00:48:16	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:17:28.394  Attaching to 0000:5e:00.0
00:17:30.924  Attached to 0000:5e:00.0
00:17:30.924  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):          0 I/Os completed (+0)
00:17:30.924  
00:17:30.924  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        128 I/Os completed (+128)
00:17:30.924  
00:17:30.924  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        256 I/Os completed (+128)
00:17:30.924  
00:17:30.924  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       1408 I/Os completed (+1152)
00:17:30.924  
00:17:31.860  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       4864 I/Os completed (+3456)
00:17:31.860  
00:17:33.280  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       8320 I/Os completed (+3456)
00:17:33.280  
00:17:33.908     00:48:22	-- nvme/sw_hotplug.sh@56 -- # false
00:17:33.908     00:48:22	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:33.908     00:48:22	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:33.908     00:48:22	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:33.908  [2024-12-17 00:48:22.888308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:33.908  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:33.908  [2024-12-17 00:48:22.888347] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908  [2024-12-17 00:48:22.888370] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908  [2024-12-17 00:48:22.888384] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908  [2024-12-17 00:48:22.888397] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:33.908  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:33.908  [2024-12-17 00:48:22.889610] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908  [2024-12-17 00:48:22.889635] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908  [2024-12-17 00:48:22.889651] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908  [2024-12-17 00:48:22.889665] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:33.908     00:48:22	-- nvme/sw_hotplug.sh@38 -- # false
00:17:33.908     00:48:22	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:17:33.908     00:48:23	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:17:33.908     00:48:23	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:17:33.908     00:48:23	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:17:33.908  
00:17:34.863  
00:17:36.235  
00:17:37.167  
00:17:37.167     00:48:26	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:17:37.167     00:48:26	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:17:37.167     00:48:26	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:17:38.102  Attaching to 0000:5e:00.0
00:17:40.000  Attached to 0000:5e:00.0
00:17:40.000  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:17:43.278     00:48:32	-- nvme/sw_hotplug.sh@56 -- # false
00:17:43.278     00:48:32	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:43.278    00:48:32	-- common/autotest_common.sh@716 -- # time=34.42
00:17:43.278    00:48:32	-- common/autotest_common.sh@718 -- # echo 34.42
00:17:43.278   00:48:32	-- nvme/sw_hotplug.sh@16 -- # helper_time=34.42
00:17:43.278   00:48:32	-- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 34.42 1
00:17:43.278  remove_attach_helper took 34.42s to complete (handling 1 nvme drive(s))
00:17:43.278   00:48:32	-- nvme/sw_hotplug.sh@79 -- # sleep 6
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@81 -- # kill -0 1035183
00:17:49.838  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (1035183) - No such process
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@83 -- # wait 1035183
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@95 -- # local dev
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=1040084
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:17:49.838   00:48:38	-- nvme/sw_hotplug.sh@101 -- # waitforlisten 1040084
00:17:49.838   00:48:38	-- common/autotest_common.sh@829 -- # '[' -z 1040084 ']'
00:17:49.838   00:48:38	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:49.838   00:48:38	-- common/autotest_common.sh@834 -- # local max_retries=100
00:17:49.838   00:48:38	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:49.838  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:49.838   00:48:38	-- common/autotest_common.sh@838 -- # xtrace_disable
00:17:49.838   00:48:38	-- common/autotest_common.sh@10 -- # set +x
00:17:49.838  [2024-12-17 00:48:38.379796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:49.838  [2024-12-17 00:48:38.379866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040084 ]
00:17:49.838  EAL: No free 2048 kB hugepages reported on node 1
00:17:49.838  [2024-12-17 00:48:38.475696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.838  [2024-12-17 00:48:38.528905] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:49.838  [2024-12-17 00:48:38.529075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:49.838  [2024-12-17 00:48:38.698077] 'OCF_Core' volume operations registered
00:17:49.838  [2024-12-17 00:48:38.700523] 'OCF_Cache' volume operations registered
00:17:49.838  [2024-12-17 00:48:38.703445] 'OCF Composite' volume operations registered
00:17:49.838  [2024-12-17 00:48:38.705901] 'SPDK_block_device' volume operations registered
00:17:50.096   00:48:39	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:50.096   00:48:39	-- common/autotest_common.sh@862 -- # return 0
00:17:50.096   00:48:39	-- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}"
00:17:50.096   00:48:39	-- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:5e:00.0
00:17:50.096   00:48:39	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.096   00:48:39	-- common/autotest_common.sh@10 -- # set +x
00:17:53.377  Nvme00n1
00:17:53.377   00:48:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.377   00:48:42	-- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6
00:17:53.377   00:48:42	-- common/autotest_common.sh@897 -- # local bdev_name=Nvme00n1
00:17:53.377   00:48:42	-- common/autotest_common.sh@898 -- # local bdev_timeout=6
00:17:53.377   00:48:42	-- common/autotest_common.sh@899 -- # local i
00:17:53.377   00:48:42	-- common/autotest_common.sh@900 -- # [[ -z 6 ]]
00:17:53.377   00:48:42	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:17:53.377   00:48:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.377   00:48:42	-- common/autotest_common.sh@10 -- # set +x
00:17:53.377   00:48:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.377   00:48:42	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6
00:17:53.377   00:48:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.377   00:48:42	-- common/autotest_common.sh@10 -- # set +x
00:17:53.377  [
00:17:53.377  {
00:17:53.377  "name": "Nvme00n1",
00:17:53.377  "aliases": [
00:17:53.377  "fa6b5bc0-eb17-4473-abc7-b729cecd1405"
00:17:53.377  ],
00:17:53.377  "product_name": "NVMe disk",
00:17:53.377  "block_size": 512,
00:17:53.377  "num_blocks": 7814037168,
00:17:53.377  "uuid": "fa6b5bc0-eb17-4473-abc7-b729cecd1405",
00:17:53.377  "assigned_rate_limits": {
00:17:53.377  "rw_ios_per_sec": 0,
00:17:53.377  "rw_mbytes_per_sec": 0,
00:17:53.377  "r_mbytes_per_sec": 0,
00:17:53.377  "w_mbytes_per_sec": 0
00:17:53.377  },
00:17:53.377  "claimed": false,
00:17:53.377  "zoned": false,
00:17:53.377  "supported_io_types": {
00:17:53.377  "read": true,
00:17:53.377  "write": true,
00:17:53.377  "unmap": true,
00:17:53.377  "write_zeroes": true,
00:17:53.377  "flush": true,
00:17:53.377  "reset": true,
00:17:53.377  "compare": false,
00:17:53.377  "compare_and_write": false,
00:17:53.377  "abort": true,
00:17:53.377  "nvme_admin": true,
00:17:53.377  "nvme_io": true
00:17:53.377  },
00:17:53.377  "driver_specific": {
00:17:53.377  "nvme": [
00:17:53.377  {
00:17:53.377  "pci_address": "0000:5e:00.0",
00:17:53.377  "trid": {
00:17:53.377  "trtype": "PCIe",
00:17:53.377  "traddr": "0000:5e:00.0"
00:17:53.377  },
00:17:53.377  "ctrlr_data": {
00:17:53.377  "cntlid": 0,
00:17:53.377  "vendor_id": "0x8086",
00:17:53.377  "model_number": "INTEL SSDPE2KX040T8",
00:17:53.377  "serial_number": "BTLJ83030AK84P0DGN",
00:17:53.377  "firmware_revision": "VDV10184",
00:17:53.377  "oacs": {
00:17:53.377  "security": 0,
00:17:53.377  "format": 1,
00:17:53.377  "firmware": 1,
00:17:53.377  "ns_manage": 1
00:17:53.377  },
00:17:53.377  "multi_ctrlr": false,
00:17:53.377  "ana_reporting": false
00:17:53.377  },
00:17:53.377  "vs": {
00:17:53.377  "nvme_version": "1.2"
00:17:53.377  },
00:17:53.377  "ns_data": {
00:17:53.377  "id": 1,
00:17:53.377  "can_share": false
00:17:53.377  }
00:17:53.377  }
00:17:53.377  ],
00:17:53.377  "mp_policy": "active_passive"
00:17:53.377  }
00:17:53.377  }
00:17:53.377  ]
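The JSON dump above is what bdev_get_bdevs returns for the attached drive; the test later extracts only the PCI address from it. The same query by hand (jq path verbatim from sw_hotplug.sh@58 below):

    $rpc_py bdev_get_bdevs -b Nvme00n1 | jq -r '.[].driver_specific.nvme[].pci_address'
    # -> 0000:5e:00.0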
00:17:53.377   00:48:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.377   00:48:42	-- common/autotest_common.sh@905 -- # return 0
00:17:53.377   00:48:42	-- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:17:53.377   00:48:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:53.377   00:48:42	-- common/autotest_common.sh@10 -- # set +x
00:17:53.377   00:48:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:53.377   00:48:42	-- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true
00:17:53.377   00:48:42	-- nvme/sw_hotplug.sh@14 -- # local helper_time=0
00:17:53.377    00:48:42	-- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true
00:17:53.377    00:48:42	-- common/autotest_common.sh@708 -- # [[ -t 0 ]]
00:17:53.377    00:48:42	-- common/autotest_common.sh@708 -- # exec
00:17:53.377    00:48:42	-- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R
00:17:53.377     00:48:42	-- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true
00:17:53.377     00:48:42	-- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3
00:17:53.377     00:48:42	-- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6
00:17:53.377     00:48:42	-- nvme/sw_hotplug.sh@24 -- # local use_bdev=true
00:17:53.377     00:48:42	-- nvme/sw_hotplug.sh@25 -- # local dev bdfs
00:17:53.377     00:48:42	-- nvme/sw_hotplug.sh@31 -- # sleep 6
00:17:59.939     00:48:48	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:59.939     00:48:48	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:59.939     00:48:48	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:59.939  [2024-12-17 00:48:48.117351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:59.939  [2024-12-17 00:48:48.117471] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:59.939  [2024-12-17 00:48:48.117495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:59.939  [2024-12-17 00:48:48.117512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:59.939  [2024-12-17 00:48:48.117534] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:59.939  [2024-12-17 00:48:48.117547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:59.939  [2024-12-17 00:48:48.117561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:59.939  [2024-12-17 00:48:48.117581] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:59.939  [2024-12-17 00:48:48.117594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:59.939  [2024-12-17 00:48:48.117607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:59.939  [2024-12-17 00:48:48.117622] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:59.939  [2024-12-17 00:48:48.117634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:59.939  [2024-12-17 00:48:48.117647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:59.939     00:48:48	-- nvme/sw_hotplug.sh@38 -- # true
00:17:59.939     00:48:48	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:18:05.206      00:48:54	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:18:05.206      00:48:54	-- nvme/sw_hotplug.sh@40 -- # jq length
00:18:05.206      00:48:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:05.206      00:48:54	-- common/autotest_common.sh@10 -- # set +x
00:18:05.206      00:48:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:05.206     00:48:54	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:18:05.206     00:48:54	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:18:05.206     00:48:54	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:18:05.206     00:48:54	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:18:05.206     00:48:54	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:18:08.492     00:48:57	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:18:08.492     00:48:57	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:18:08.492     00:48:57	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@56 -- # true
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:18:15.061      00:49:03	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:18:15.061      00:49:03	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.061      00:49:03	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:18:15.061      00:49:03	-- common/autotest_common.sh@10 -- # set +x
00:18:15.061      00:49:03	-- nvme/sw_hotplug.sh@58 -- # sort
00:18:15.061      00:49:03	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:18:15.061  [2024-12-17 00:49:03.726816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:18:15.061  [2024-12-17 00:49:03.726930] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:15.061  [2024-12-17 00:49:03.726955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:15.061  [2024-12-17 00:49:03.726971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:15.061  [2024-12-17 00:49:03.726993] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:15.061  [2024-12-17 00:49:03.727005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:15.061  [2024-12-17 00:49:03.727019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:15.061  [2024-12-17 00:49:03.727033] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:15.061  [2024-12-17 00:49:03.727045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:15.061  [2024-12-17 00:49:03.727058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:15.061  [2024-12-17 00:49:03.727072] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:15.061  [2024-12-17 00:49:03.727090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:15.061  [2024-12-17 00:49:03.727104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@38 -- # true
00:18:15.061     00:49:03	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:18:21.638      00:49:09	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:18:21.638      00:49:09	-- nvme/sw_hotplug.sh@40 -- # jq length
00:18:21.638      00:49:09	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.638      00:49:09	-- common/autotest_common.sh@10 -- # set +x
00:18:21.638      00:49:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.638     00:49:09	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:18:21.638     00:49:09	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:18:21.638     00:49:09	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:18:21.638     00:49:09	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:18:21.638     00:49:09	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:18:24.169     00:49:13	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:18:24.169     00:49:13	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:18:24.169     00:49:13	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@56 -- # true
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:18:30.740      00:49:19	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:18:30.740      00:49:19	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:30.740      00:49:19	-- common/autotest_common.sh@10 -- # set +x
00:18:30.740      00:49:19	-- nvme/sw_hotplug.sh@58 -- # sort
00:18:30.740      00:49:19	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:18:30.740      00:49:19	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:18:30.740  [2024-12-17 00:49:19.334845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:18:30.740  [2024-12-17 00:49:19.334972] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:30.740  [2024-12-17 00:49:19.334996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:30.740  [2024-12-17 00:49:19.335013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:30.740  [2024-12-17 00:49:19.335034] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:30.740  [2024-12-17 00:49:19.335047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:30.740  [2024-12-17 00:49:19.335061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:30.740  [2024-12-17 00:49:19.335075] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:30.740  [2024-12-17 00:49:19.335087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:30.740  [2024-12-17 00:49:19.335100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:30.740  [2024-12-17 00:49:19.335114] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:30.740  [2024-12-17 00:49:19.335126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:30.740  [2024-12-17 00:49:19.335139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@38 -- # true
00:18:30.740     00:49:19	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:18:37.287      00:49:25	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:18:37.287      00:49:25	-- nvme/sw_hotplug.sh@40 -- # jq length
00:18:37.287      00:49:25	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:37.287      00:49:25	-- common/autotest_common.sh@10 -- # set +x
00:18:37.287      00:49:25	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:37.287     00:49:25	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:18:37.287     00:49:25	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:18:37.287     00:49:25	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:18:37.287     00:49:25	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:18:37.287     00:49:25	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:18:39.811     00:49:28	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:18:39.811     00:49:28	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:18:39.811     00:49:28	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@56 -- # true
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:18:46.360      00:49:34	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:18:46.360      00:49:34	-- nvme/sw_hotplug.sh@58 -- # sort
00:18:46.360      00:49:34	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:18:46.360      00:49:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.360      00:49:34	-- common/autotest_common.sh@10 -- # set +x
00:18:46.360      00:49:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:18:46.360    00:49:34	-- common/autotest_common.sh@716 -- # time=52.83
00:18:46.360    00:49:34	-- common/autotest_common.sh@718 -- # echo 52.83
00:18:46.360   00:49:34	-- nvme/sw_hotplug.sh@16 -- # helper_time=52.83
00:18:46.360   00:49:34	-- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 52.83 1
00:18:46.360  remove_attach_helper took 52.83s to complete (handling 1 nvme drive(s))
00:18:46.360   00:49:34	-- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d
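For readers tracing sw_hotplug.sh@34-50 above: the hotplug cycle is driven entirely through bare `echo` commands, and the log records only the values written, not the sysfs targets. The paths below are therefore an assumption based on the standard Linux PCI sysfs ABI, not lifted from the script itself:

# Plausible reconstruction of the echo targets traced above (paths assumed, BDF from this run)
echo 1 > /sys/bus/pci/devices/0000:5e:00.0/remove                  # @35: surprise-remove the device
echo 1 > /sys/bus/pci/rescan                                       # @44: rediscover it on the bus
echo vfio-pci > /sys/bus/pci/devices/0000:5e:00.0/driver_override  # @47: pin the userspace driver
echo 0000:5e:00.0 > /sys/bus/pci/drivers_probe                     # @48-49: (re)bind per the override
echo '' > /sys/bus/pci/devices/0000:5e:00.0/driver_override        # @50: clear the override again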
00:18:46.360   00:49:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.360   00:49:34	-- common/autotest_common.sh@10 -- # set +x
00:18:46.360   00:49:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.360   00:49:34	-- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:18:46.360   00:49:34	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.360   00:49:34	-- common/autotest_common.sh@10 -- # set +x
00:18:46.360   00:49:34	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.360   00:49:34	-- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true
00:18:46.360   00:49:34	-- nvme/sw_hotplug.sh@14 -- # local helper_time=0
00:18:46.360    00:49:34	-- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true
00:18:46.360    00:49:34	-- common/autotest_common.sh@708 -- # [[ -t 0 ]]
00:18:46.360    00:49:34	-- common/autotest_common.sh@708 -- # exec
00:18:46.360    00:49:34	-- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R
00:18:46.360     00:49:34	-- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@24 -- # local use_bdev=true
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@25 -- # local dev bdfs
00:18:46.360     00:49:34	-- nvme/sw_hotplug.sh@31 -- # sleep 6
00:18:52.913     00:49:40	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:18:52.913     00:49:40	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:18:52.913     00:49:40	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:18:52.913  [2024-12-17 00:49:41.046054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:18:52.913  [2024-12-17 00:49:41.046171] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:52.913  [2024-12-17 00:49:41.046195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:52.913  [2024-12-17 00:49:41.046213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.913  [2024-12-17 00:49:41.046235] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:52.913  [2024-12-17 00:49:41.046247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:52.913  [2024-12-17 00:49:41.046268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.913  [2024-12-17 00:49:41.046284] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:52.913  [2024-12-17 00:49:41.046296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:52.913  [2024-12-17 00:49:41.046309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.913  [2024-12-17 00:49:41.046324] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:52.913  [2024-12-17 00:49:41.046337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:52.913  [2024-12-17 00:49:41.046350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.913     00:49:41	-- nvme/sw_hotplug.sh@38 -- # true
00:18:52.913     00:49:41	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:18:58.283      00:49:47	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:18:58.283      00:49:47	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.283      00:49:47	-- common/autotest_common.sh@10 -- # set +x
00:18:58.283      00:49:47	-- nvme/sw_hotplug.sh@40 -- # jq length
00:18:58.283      00:49:47	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.283     00:49:47	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:18:58.283     00:49:47	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:18:58.283     00:49:47	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:18:58.283     00:49:47	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:18:58.283     00:49:47	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:19:01.567     00:49:50	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:19:01.567     00:49:50	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:19:01.567     00:49:50	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@56 -- # true
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:19:08.127      00:49:56	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:19:08.127      00:49:56	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:19:08.127      00:49:56	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.127      00:49:56	-- common/autotest_common.sh@10 -- # set +x
00:19:08.127      00:49:56	-- nvme/sw_hotplug.sh@58 -- # sort
00:19:08.127      00:49:56	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:19:08.127  [2024-12-17 00:49:56.650542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:19:08.127  [2024-12-17 00:49:56.650649] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:08.127  [2024-12-17 00:49:56.650672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:08.127  [2024-12-17 00:49:56.650690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:08.127  [2024-12-17 00:49:56.650710] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:08.127  [2024-12-17 00:49:56.650722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:08.127  [2024-12-17 00:49:56.650736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:08.127  [2024-12-17 00:49:56.650750] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:08.127  [2024-12-17 00:49:56.650763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:08.127  [2024-12-17 00:49:56.650776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:08.127  [2024-12-17 00:49:56.650796] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:08.127  [2024-12-17 00:49:56.650808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:08.127  [2024-12-17 00:49:56.650822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@38 -- # true
00:19:08.127     00:49:56	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:19:14.686      00:50:02	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:19:14.686      00:50:02	-- nvme/sw_hotplug.sh@40 -- # jq length
00:19:14.686      00:50:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:14.686      00:50:02	-- common/autotest_common.sh@10 -- # set +x
00:19:14.686      00:50:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:14.686     00:50:02	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:19:14.686     00:50:02	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:19:14.686     00:50:02	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:19:14.686     00:50:02	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:19:14.686     00:50:02	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:19:17.215     00:50:06	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:19:17.215     00:50:06	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:19:17.215     00:50:06	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:19:23.771     00:50:12	-- nvme/sw_hotplug.sh@56 -- # true
00:19:23.771     00:50:12	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:19:23.771      00:50:12	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:19:23.771      00:50:12	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:19:23.772      00:50:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:23.772      00:50:12	-- common/autotest_common.sh@10 -- # set +x
00:19:23.772      00:50:12	-- nvme/sw_hotplug.sh@58 -- # sort
00:19:23.772      00:50:12	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:23.772     00:50:12	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:19:23.772     00:50:12	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:19:23.772     00:50:12	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:19:23.772     00:50:12	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:19:23.772  [2024-12-17 00:50:12.254848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:19:23.772  [2024-12-17 00:50:12.254976] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:23.772  [2024-12-17 00:50:12.255001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:23.772  [2024-12-17 00:50:12.255018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:23.772  [2024-12-17 00:50:12.255041] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:23.772  [2024-12-17 00:50:12.255053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:23.772  [2024-12-17 00:50:12.255067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:23.772  [2024-12-17 00:50:12.255081] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:23.772  [2024-12-17 00:50:12.255093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:23.772  [2024-12-17 00:50:12.255107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:23.772  [2024-12-17 00:50:12.255120] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:19:23.772  [2024-12-17 00:50:12.255132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:23.772  [2024-12-17 00:50:12.255146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:23.772     00:50:12	-- nvme/sw_hotplug.sh@38 -- # true
00:19:23.772     00:50:12	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:19:29.027      00:50:18	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:19:29.027      00:50:18	-- nvme/sw_hotplug.sh@40 -- # jq length
00:19:29.027      00:50:18	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:29.027      00:50:18	-- common/autotest_common.sh@10 -- # set +x
00:19:29.284      00:50:18	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:29.284     00:50:18	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:19:29.284     00:50:18	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:19:29.284     00:50:18	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:19:29.284     00:50:18	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:19:29.284     00:50:18	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:19:32.559     00:50:21	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:19:32.559     00:50:21	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:19:32.559     00:50:21	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:19:39.108     00:50:27	-- nvme/sw_hotplug.sh@56 -- # true
00:19:39.108     00:50:27	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:19:39.108      00:50:27	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:19:39.108      00:50:27	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:39.108      00:50:27	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:19:39.108      00:50:27	-- common/autotest_common.sh@10 -- # set +x
00:19:39.108      00:50:27	-- nvme/sw_hotplug.sh@58 -- # sort
00:19:39.108      00:50:27	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:39.108     00:50:27	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:19:39.108     00:50:27	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:19:39.108    00:50:27	-- common/autotest_common.sh@716 -- # time=52.88
00:19:39.108    00:50:27	-- common/autotest_common.sh@718 -- # echo 52.88
00:19:39.108   00:50:27	-- nvme/sw_hotplug.sh@16 -- # helper_time=52.88
00:19:39.108   00:50:27	-- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 52.88 1
00:19:39.108  remove_attach_helper took 52.88s to complete (handling 1 nvme drive(s))
00:19:39.108   00:50:27	-- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT
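The ~52.8 s figures are consistent with the loop structure traced above: one initial 6 s settle (sw_hotplug.sh@31), then three hotplug events, each bracketed by a 6 s sleep after the remove (@40) and another after the rescan (@54), plus a few seconds of rescan latency. A rough check, with the ~3.6 s rescan latency read off the timestamps:

# 6s settle + 3 events x (6s + ~3.6s rescan + 6s) ~= 52.8s
echo "6 + 3 * (6 + 3.6 + 6)" | bc    # prints 52.8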
00:19:39.108   00:50:27	-- nvme/sw_hotplug.sh@118 -- # killprocess 1040084
00:19:39.108   00:50:27	-- common/autotest_common.sh@936 -- # '[' -z 1040084 ']'
00:19:39.108   00:50:27	-- common/autotest_common.sh@940 -- # kill -0 1040084
00:19:39.108    00:50:27	-- common/autotest_common.sh@941 -- # uname
00:19:39.108   00:50:27	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:39.108    00:50:27	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1040084
00:19:39.108   00:50:27	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:39.108   00:50:27	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:39.108   00:50:27	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1040084'
00:19:39.108  killing process with pid 1040084
00:19:39.108   00:50:27	-- common/autotest_common.sh@955 -- # kill 1040084
00:19:39.108   00:50:27	-- common/autotest_common.sh@960 -- # wait 1040084
00:19:43.293  
00:19:43.293  real	2m40.720s
00:19:43.293  user	1m45.399s
00:19:43.293  sys	0m42.386s
00:19:43.293   00:50:32	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:43.293   00:50:32	-- common/autotest_common.sh@10 -- # set +x
00:19:43.293  ************************************
00:19:43.293  END TEST sw_hotplug
00:19:43.293  ************************************
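Each reattach check in the loop above (sw_hotplug.sh@58-59) reduces to one RPC-plus-jq pipeline. It is shown here as traced, with rpc_cmd expanded to the rpc.py path this job uses elsewhere; the wrapper's exact behavior (socket selection, retries) is an assumption:

# Confirm the bdev reappeared on the expected PCI address after rescan
bdfs=($(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
[[ ${bdfs[0]} == 0000:5e:00.0 ]]    # single-drive case, as on this node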
00:19:43.293   00:50:32	-- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]]
00:19:43.293   00:50:32	-- spdk/autotest.sh@251 -- # '[' 1 -eq 1 ']'
00:19:43.293   00:50:32	-- spdk/autotest.sh@252 -- # run_test ioat /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh
00:19:43.293   00:50:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:43.293   00:50:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:43.293   00:50:32	-- common/autotest_common.sh@10 -- # set +x
00:19:43.293  ************************************
00:19:43.293  START TEST ioat
00:19:43.293  ************************************
00:19:43.293   00:50:32	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh
00:19:43.293  * Looking for test storage...
00:19:43.293  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat
00:19:43.293    00:50:32	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:43.293     00:50:32	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:43.293     00:50:32	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:43.293    00:50:32	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:43.293    00:50:32	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:43.293    00:50:32	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:43.293    00:50:32	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:43.293    00:50:32	-- scripts/common.sh@335 -- # IFS=.-:
00:19:43.293    00:50:32	-- scripts/common.sh@335 -- # read -ra ver1
00:19:43.293    00:50:32	-- scripts/common.sh@336 -- # IFS=.-:
00:19:43.293    00:50:32	-- scripts/common.sh@336 -- # read -ra ver2
00:19:43.293    00:50:32	-- scripts/common.sh@337 -- # local 'op=<'
00:19:43.293    00:50:32	-- scripts/common.sh@339 -- # ver1_l=2
00:19:43.293    00:50:32	-- scripts/common.sh@340 -- # ver2_l=1
00:19:43.293    00:50:32	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:43.293    00:50:32	-- scripts/common.sh@343 -- # case "$op" in
00:19:43.293    00:50:32	-- scripts/common.sh@344 -- # : 1
00:19:43.293    00:50:32	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:43.293    00:50:32	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:43.293     00:50:32	-- scripts/common.sh@364 -- # decimal 1
00:19:43.293     00:50:32	-- scripts/common.sh@352 -- # local d=1
00:19:43.293     00:50:32	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:43.293     00:50:32	-- scripts/common.sh@354 -- # echo 1
00:19:43.293    00:50:32	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:43.293     00:50:32	-- scripts/common.sh@365 -- # decimal 2
00:19:43.293     00:50:32	-- scripts/common.sh@352 -- # local d=2
00:19:43.293     00:50:32	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:43.293     00:50:32	-- scripts/common.sh@354 -- # echo 2
00:19:43.293    00:50:32	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:43.293    00:50:32	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:43.293    00:50:32	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:43.293    00:50:32	-- scripts/common.sh@367 -- # return 0
00:19:43.293    00:50:32	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:43.293    00:50:32	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:43.293  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.293  		--rc genhtml_branch_coverage=1
00:19:43.293  		--rc genhtml_function_coverage=1
00:19:43.293  		--rc genhtml_legend=1
00:19:43.293  		--rc geninfo_all_blocks=1
00:19:43.293  		--rc geninfo_unexecuted_blocks=1
00:19:43.293  		
00:19:43.293  		'
00:19:43.293    00:50:32	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:43.293  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.293  		--rc genhtml_branch_coverage=1
00:19:43.293  		--rc genhtml_function_coverage=1
00:19:43.293  		--rc genhtml_legend=1
00:19:43.293  		--rc geninfo_all_blocks=1
00:19:43.293  		--rc geninfo_unexecuted_blocks=1
00:19:43.293  		
00:19:43.293  		'
00:19:43.294    00:50:32	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:43.294  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.294  		--rc genhtml_branch_coverage=1
00:19:43.294  		--rc genhtml_function_coverage=1
00:19:43.294  		--rc genhtml_legend=1
00:19:43.294  		--rc geninfo_all_blocks=1
00:19:43.294  		--rc geninfo_unexecuted_blocks=1
00:19:43.294  		
00:19:43.294  		'
00:19:43.294    00:50:32	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:43.294  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:43.294  		--rc genhtml_branch_coverage=1
00:19:43.294  		--rc genhtml_function_coverage=1
00:19:43.294  		--rc genhtml_legend=1
00:19:43.294  		--rc geninfo_all_blocks=1
00:19:43.294  		--rc geninfo_unexecuted_blocks=1
00:19:43.294  		
00:19:43.294  		'
00:19:43.294   00:50:32	-- ioat/ioat.sh@10 -- # run_test ioat_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1
00:19:43.294   00:50:32	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:19:43.294   00:50:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:43.294   00:50:32	-- common/autotest_common.sh@10 -- # set +x
00:19:43.294  ************************************
00:19:43.294  START TEST ioat_perf
00:19:43.294  ************************************
00:19:43.294   00:50:32	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1
00:19:43.294  EAL: No free 2048 kB hugepages reported on node 1
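The EAL notice above means no pre-reserved 2048 kB hugepages were visible on NUMA node 1 at startup; the test still runs because SPDK's setup.sh handles hugepage reservation separately. For reference, a manual reservation goes through the kernel's hugetlb sysfs interface (a generic sketch, not a command from this job; the count is illustrative):

# Reserve 1024 x 2 MiB hugepages system-wide
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages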
00:19:45.196  [2024-12-17 00:50:33.996403] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996468] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996482] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996493] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996505] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996515] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996530] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996541] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996552] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996562] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996573] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996584] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996594] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996605] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996616] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown!
00:19:45.196  [2024-12-17 00:50:33.996626] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown!
00:19:45.196   Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021
00:19:45.196   Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021
00:19:45.196  User configuration:
00:19:45.196  Number of channels:    1
00:19:45.196  Transfer size:  4096 bytes
00:19:45.196  Queue depth:    256
00:19:45.196  Run time:       1 seconds
00:19:45.196  Core mask:      0x1
00:19:45.196  Verify:         No
00:19:45.196  
00:19:45.196  Associating ioat_channel 0 with core 0
00:19:45.196  Starting thread on core 0
00:19:45.196  Channel_ID     Core     Transfers     Bandwidth     Failed
00:19:45.196  -----------------------------------------------------------
00:19:45.196           0         0      691200/s    2700 MiB/s          0
00:19:45.196  ===========================================================
00:19:45.196  Total:                    691200/s    2700 MiB/s          0
00:19:45.196  
00:19:45.196  real	0m1.662s
00:19:45.196  user	0m1.294s
00:19:45.196  sys	0m0.176s
00:19:45.196   00:50:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:45.196   00:50:34	-- common/autotest_common.sh@10 -- # set +x
00:19:45.196  ************************************
00:19:45.196  END TEST ioat_perf
00:19:45.196  ************************************
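The ioat_perf report above is internally consistent: 691200 transfers/s at the configured 4096-byte transfer size works out to exactly 2700 MiB/s.

# 691200 transfers/s * 4096 B/transfer = 2831155200 B/s; / 2^20 B/MiB = 2700 MiB/s
echo $(( 691200 * 4096 / 1048576 ))    # prints 2700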
00:19:45.196   00:50:34	-- ioat/ioat.sh@12 -- # run_test ioat_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1
00:19:45.196   00:50:34	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:19:45.196   00:50:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:45.196   00:50:34	-- common/autotest_common.sh@10 -- # set +x
00:19:45.196  ************************************
00:19:45.196  START TEST ioat_verify
00:19:45.196  ************************************
00:19:45.196   00:50:34	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1
00:19:45.196  EAL: No free 2048 kB hugepages reported on node 1
00:19:46.570  [2024-12-17 00:50:35.771816] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771904] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771918] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771930] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771941] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771959] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771970] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771981] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.771992] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.772003] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.772013] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.772025] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.772035] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.772046] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.772056] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown!
00:19:46.570  [2024-12-17 00:50:35.772067] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown!
00:19:46.570   Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021
00:19:46.570   Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021
00:19:46.570  User configuration:
00:19:46.570  Run time:       1 seconds
00:19:46.570  Core mask:      0x1
00:19:46.570  Queue depth:    32
00:19:46.570  lcore = 0, copy success = 543, copy failed = 0, fill success = 544, fill failed = 0
00:19:46.570  
00:19:46.570  real	0m1.729s
00:19:46.570  user	0m1.366s
00:19:46.570  sys	0m0.169s
00:19:46.570   00:50:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:46.570   00:50:35	-- common/autotest_common.sh@10 -- # set +x
00:19:46.570  ************************************
00:19:46.570  END TEST ioat_verify
00:19:46.570  ************************************
00:19:46.570  
00:19:46.570  real	0m3.668s
00:19:46.570  user	0m2.808s
00:19:46.570  sys	0m0.506s
00:19:46.570   00:50:35	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:46.570   00:50:35	-- common/autotest_common.sh@10 -- # set +x
00:19:46.570  ************************************
00:19:46.570  END TEST ioat
00:19:46.570  ************************************
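The START/END banners throughout this log come from the run_test wrapper in autotest_common.sh, invoked as `run_test <name> <command> [args...]`. Its body is not shown in this trace, so the following is only a minimal reconstruction consistent with the banners and the argument-count check traced above, not the real helper:

# Hypothetical sketch of run_test (the actual helper lives in autotest_common.sh)
run_test() {
	local test_name=$1; shift
	echo "************************************"
	echo "START TEST $test_name"
	echo "************************************"
	"$@"
	local rc=$?
	echo "************************************"
	echo "END TEST $test_name"
	echo "************************************"
	return $rc
}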
00:19:46.829   00:50:35	-- spdk/autotest.sh@255 -- # timing_exit lib
00:19:46.829   00:50:35	-- common/autotest_common.sh@728 -- # xtrace_disable
00:19:46.829   00:50:35	-- common/autotest_common.sh@10 -- # set +x
00:19:46.829   00:50:35	-- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@325 -- # '[' 1 -eq 1 ']'
00:19:46.829   00:50:35	-- spdk/autotest.sh@326 -- # run_test ocf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh
00:19:46.829   00:50:35	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:46.829   00:50:35	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:46.829   00:50:35	-- common/autotest_common.sh@10 -- # set +x
00:19:46.829  ************************************
00:19:46.829  START TEST ocf
00:19:46.829  ************************************
00:19:46.829   00:50:35	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh
00:19:46.829  * Looking for test storage...
00:19:46.829  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf
00:19:46.829    00:50:35	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:46.829     00:50:35	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:46.829     00:50:35	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:46.829    00:50:36	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:46.829    00:50:36	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:46.829    00:50:36	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:46.829    00:50:36	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:46.829    00:50:36	-- scripts/common.sh@335 -- # IFS=.-:
00:19:46.829    00:50:36	-- scripts/common.sh@335 -- # read -ra ver1
00:19:46.829    00:50:36	-- scripts/common.sh@336 -- # IFS=.-:
00:19:46.829    00:50:36	-- scripts/common.sh@336 -- # read -ra ver2
00:19:46.829    00:50:36	-- scripts/common.sh@337 -- # local 'op=<'
00:19:46.829    00:50:36	-- scripts/common.sh@339 -- # ver1_l=2
00:19:46.829    00:50:36	-- scripts/common.sh@340 -- # ver2_l=1
00:19:46.829    00:50:36	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:46.829    00:50:36	-- scripts/common.sh@343 -- # case "$op" in
00:19:46.829    00:50:36	-- scripts/common.sh@344 -- # : 1
00:19:46.829    00:50:36	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:46.829    00:50:36	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:46.829     00:50:36	-- scripts/common.sh@364 -- # decimal 1
00:19:46.829     00:50:36	-- scripts/common.sh@352 -- # local d=1
00:19:46.829     00:50:36	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:46.829     00:50:36	-- scripts/common.sh@354 -- # echo 1
00:19:46.829    00:50:36	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:46.829     00:50:36	-- scripts/common.sh@365 -- # decimal 2
00:19:46.829     00:50:36	-- scripts/common.sh@352 -- # local d=2
00:19:46.829     00:50:36	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:46.829     00:50:36	-- scripts/common.sh@354 -- # echo 2
00:19:46.829    00:50:36	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:46.829    00:50:36	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:46.829    00:50:36	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:46.829    00:50:36	-- scripts/common.sh@367 -- # return 0
00:19:46.829    00:50:36	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:46.829    00:50:36	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:46.829  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:46.829  		--rc genhtml_branch_coverage=1
00:19:46.829  		--rc genhtml_function_coverage=1
00:19:46.829  		--rc genhtml_legend=1
00:19:46.829  		--rc geninfo_all_blocks=1
00:19:46.829  		--rc geninfo_unexecuted_blocks=1
00:19:46.829  		
00:19:46.829  		'
00:19:46.829    00:50:36	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:46.829  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:46.829  		--rc genhtml_branch_coverage=1
00:19:46.829  		--rc genhtml_function_coverage=1
00:19:46.829  		--rc genhtml_legend=1
00:19:46.829  		--rc geninfo_all_blocks=1
00:19:46.829  		--rc geninfo_unexecuted_blocks=1
00:19:46.829  		
00:19:46.829  		'
00:19:46.829    00:50:36	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:46.829  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:46.829  		--rc genhtml_branch_coverage=1
00:19:46.829  		--rc genhtml_function_coverage=1
00:19:46.829  		--rc genhtml_legend=1
00:19:46.829  		--rc geninfo_all_blocks=1
00:19:46.829  		--rc geninfo_unexecuted_blocks=1
00:19:46.829  		
00:19:46.829  		'
00:19:46.829    00:50:36	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:46.829  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:46.829  		--rc genhtml_branch_coverage=1
00:19:46.829  		--rc genhtml_function_coverage=1
00:19:46.829  		--rc genhtml_legend=1
00:19:46.829  		--rc geninfo_all_blocks=1
00:19:46.829  		--rc geninfo_unexecuted_blocks=1
00:19:46.829  		
00:19:46.829  		'
00:19:46.829   00:50:36	-- ocf/ocf.sh@11 -- # run_test ocf_fio_modes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh
00:19:46.829   00:50:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:46.829   00:50:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:46.829   00:50:36	-- common/autotest_common.sh@10 -- # set +x
00:19:46.829  ************************************
00:19:46.829  START TEST ocf_fio_modes
00:19:46.829  ************************************
00:19:46.830   00:50:36	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh
00:19:47.087     00:50:36	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:47.087      00:50:36	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:47.087      00:50:36	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:47.087     00:50:36	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:47.087     00:50:36	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:47.087     00:50:36	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:47.087     00:50:36	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:47.087     00:50:36	-- scripts/common.sh@335 -- # IFS=.-:
00:19:47.087     00:50:36	-- scripts/common.sh@335 -- # read -ra ver1
00:19:47.087     00:50:36	-- scripts/common.sh@336 -- # IFS=.-:
00:19:47.087     00:50:36	-- scripts/common.sh@336 -- # read -ra ver2
00:19:47.087     00:50:36	-- scripts/common.sh@337 -- # local 'op=<'
00:19:47.087     00:50:36	-- scripts/common.sh@339 -- # ver1_l=2
00:19:47.087     00:50:36	-- scripts/common.sh@340 -- # ver2_l=1
00:19:47.087     00:50:36	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:47.087     00:50:36	-- scripts/common.sh@343 -- # case "$op" in
00:19:47.087     00:50:36	-- scripts/common.sh@344 -- # : 1
00:19:47.087     00:50:36	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:47.087     00:50:36	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:47.087      00:50:36	-- scripts/common.sh@364 -- # decimal 1
00:19:47.087      00:50:36	-- scripts/common.sh@352 -- # local d=1
00:19:47.087      00:50:36	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:47.087      00:50:36	-- scripts/common.sh@354 -- # echo 1
00:19:47.087     00:50:36	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:47.087      00:50:36	-- scripts/common.sh@365 -- # decimal 2
00:19:47.087      00:50:36	-- scripts/common.sh@352 -- # local d=2
00:19:47.087      00:50:36	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:47.087      00:50:36	-- scripts/common.sh@354 -- # echo 2
00:19:47.087     00:50:36	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:47.087     00:50:36	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:47.087     00:50:36	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:47.087     00:50:36	-- scripts/common.sh@367 -- # return 0
00:19:47.087     00:50:36	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:47.087     00:50:36	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:47.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:47.087  		--rc genhtml_branch_coverage=1
00:19:47.087  		--rc genhtml_function_coverage=1
00:19:47.087  		--rc genhtml_legend=1
00:19:47.087  		--rc geninfo_all_blocks=1
00:19:47.087  		--rc geninfo_unexecuted_blocks=1
00:19:47.087  		
00:19:47.087  		'
00:19:47.087     00:50:36	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:47.087  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:47.087  		--rc genhtml_branch_coverage=1
00:19:47.087  		--rc genhtml_function_coverage=1
00:19:47.087  		--rc genhtml_legend=1
00:19:47.088  		--rc geninfo_all_blocks=1
00:19:47.088  		--rc geninfo_unexecuted_blocks=1
00:19:47.088  		
00:19:47.088  		'
00:19:47.088     00:50:36	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:47.088  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:47.088  		--rc genhtml_branch_coverage=1
00:19:47.088  		--rc genhtml_function_coverage=1
00:19:47.088  		--rc genhtml_legend=1
00:19:47.088  		--rc geninfo_all_blocks=1
00:19:47.088  		--rc geninfo_unexecuted_blocks=1
00:19:47.088  		
00:19:47.088  		'
00:19:47.088     00:50:36	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:47.088  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:47.088  		--rc genhtml_branch_coverage=1
00:19:47.088  		--rc genhtml_function_coverage=1
00:19:47.088  		--rc genhtml_legend=1
00:19:47.088  		--rc geninfo_all_blocks=1
00:19:47.088  		--rc geninfo_unexecuted_blocks=1
00:19:47.088  		
00:19:47.088  		'
00:19:47.088    00:50:36	-- ocf/common.sh@9 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:19:47.088   00:50:36	-- integrity/fio-modes.sh@20 -- # clear_nvme
00:19:47.088   00:50:36	-- ocf/common.sh@12 -- # mapfile -t bdf
00:19:47.088    00:50:36	-- ocf/common.sh@12 -- # get_first_nvme_bdf
00:19:47.088    00:50:36	-- common/autotest_common.sh@1519 -- # bdfs=()
00:19:47.088    00:50:36	-- common/autotest_common.sh@1519 -- # local bdfs
00:19:47.088    00:50:36	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:19:47.088     00:50:36	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:19:47.088     00:50:36	-- common/autotest_common.sh@1508 -- # bdfs=()
00:19:47.088     00:50:36	-- common/autotest_common.sh@1508 -- # local bdfs
00:19:47.088     00:50:36	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:19:47.088      00:50:36	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:19:47.088      00:50:36	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:19:47.088     00:50:36	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:19:47.088     00:50:36	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:19:47.088    00:50:36	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
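The get_first_nvme_bdf trace above condenses to one pipeline: gen_nvme.sh emits a bdev config whose traddr params are the NVMe PCI addresses, and the first entry is taken (head -n1 stands in for the traced printf/echo steps):

# First NVMe BDF on this node; prints 0000:5e:00.0 here
bdf=$(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh \
      | jq -r '.config[].params.traddr' | head -n1)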
00:19:47.088   00:50:36	-- ocf/common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:19:50.371  Waiting for block devices as requested
00:19:50.371  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:19:50.629  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:19:50.629  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:19:50.629  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:19:50.888  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:19:50.888  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:19:50.888  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:19:51.147  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:19:51.147  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:19:51.147  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:19:51.406  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:19:51.406  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:19:51.406  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:19:51.664  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:19:51.664  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:19:51.664  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:19:51.922  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:19:51.922    00:50:40	-- ocf/common.sh@17 -- # get_nvme_name_from_bdf 0000:5e:00.0
00:19:51.922    00:50:40	-- common/autotest_common.sh@1476 -- # blkname=()
00:19:51.922     00:50:40	-- common/autotest_common.sh@1478 -- # lsblk -d --output NAME
00:19:51.922     00:50:40	-- common/autotest_common.sh@1478 -- # grep '^nvme'
00:19:51.922    00:50:41	-- common/autotest_common.sh@1478 -- # nvme_devs=nvme0n1
00:19:51.922    00:50:41	-- common/autotest_common.sh@1479 -- # '[' -z nvme0n1 ']'
00:19:51.922    00:50:41	-- common/autotest_common.sh@1482 -- # for dev in $nvme_devs
00:19:51.922     00:50:41	-- common/autotest_common.sh@1483 -- # readlink /sys/block/nvme0n1/device/device
00:19:51.922    00:50:41	-- common/autotest_common.sh@1483 -- # link_name=../../../0000:5e:00.0
00:19:51.922    00:50:41	-- common/autotest_common.sh@1484 -- # '[' -z ../../../0000:5e:00.0 ']'
00:19:51.922     00:50:41	-- common/autotest_common.sh@1487 -- # basename ../../../0000:5e:00.0
00:19:51.922    00:50:41	-- common/autotest_common.sh@1487 -- # bdf=0000:5e:00.0
00:19:51.922    00:50:41	-- common/autotest_common.sh@1488 -- # '[' 0000:5e:00.0 = 0000:5e:00.0 ']'
00:19:51.922    00:50:41	-- common/autotest_common.sh@1489 -- # blkname+=($dev)
00:19:51.922    00:50:41	-- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0n1
00:19:51.922   00:50:41	-- ocf/common.sh@17 -- # name=nvme0n1
00:19:51.922    00:50:41	-- ocf/common.sh@18 -- # lsblk /dev/nvme0n1 --output MOUNTPOINT -n
00:19:51.922    00:50:41	-- ocf/common.sh@18 -- # wc -w
00:19:51.922   00:50:41	-- ocf/common.sh@18 -- # mountpoints=0
00:19:51.922   00:50:41	-- ocf/common.sh@19 -- # '[' 0 '!=' 0 ']'
00:19:51.922   00:50:41	-- ocf/common.sh@22 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1000 oflag=direct
00:19:52.488  1000+0 records in
00:19:52.488  1000+0 records out
00:19:52.488  1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.481966 s, 2.2 GB/s
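Condensed, the ocf/common.sh preparation traced above does two things: map the BDF back to its kernel block device through the /sys/block symlinks (0000:5e:00.0 -> nvme0n1 here), then refuse to touch a mounted disk before preconditioning it with dd. A sketch, with error handling trimmed:

get_nvme_name_from_bdf() {
	local bdf=$1 dev link_name
	for dev in $(lsblk -d --output NAME | grep '^nvme'); do
		# /sys/block/<dev>/device/device is a symlink whose basename
		# is the controller's PCI address, e.g. ../../../0000:5e:00.0
		link_name=$(readlink "/sys/block/$dev/device/device") || continue
		[[ $(basename "$link_name") == "$bdf" ]] && printf '%s\n' "$dev"
	done
}

name=$(get_nvme_name_from_bdf 0000:5e:00.0)                # -> nvme0n1
mountpoints=$(lsblk "/dev/$name" --output MOUNTPOINT -n | wc -w)
((mountpoints == 0)) || exit 1                             # never wipe a mounted disk
dd if=/dev/zero of="/dev/$name" bs=1M count=1000 oflag=direct

The wipe size matches the dd summary above: count=1000 x bs=1M = 1,048,576,000 bytes, written at about 2.2 GB/s with direct I/O.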
00:19:52.488   00:50:41	-- ocf/common.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:19:55.774  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:19:55.774  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:19:59.057  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:19:59.057   00:50:48	-- integrity/fio-modes.sh@22 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:59.057   00:50:48	-- integrity/fio-modes.sh@25 -- # xtrace_disable
00:19:59.057   00:50:48	-- common/autotest_common.sh@10 -- # set +x
00:19:59.316  {
00:19:59.316    "subsystems": [
00:19:59.316      {
00:19:59.316        "subsystem": "bdev",
00:19:59.316        "config": [
00:19:59.316          {
00:19:59.316            "method": "bdev_nvme_attach_controller",
00:19:59.316            "params": {
00:19:59.316              "trtype": "PCIe",
00:19:59.316              "name": "Nvme0",
00:19:59.316              "traddr": "0000:5e:00.0"
00:19:59.316            }
00:19:59.316          },
00:19:59.316          {
00:19:59.316            "method": "bdev_split_create",
00:19:59.316            "params": {
00:19:59.316              "base_bdev": "Nvme0n1",
00:19:59.316              "split_count": 8,
00:19:59.316              "split_size_mb": 101
00:19:59.316            }
00:19:59.316          },
00:19:59.316          {
00:19:59.316            "method": "bdev_ocf_create",
00:19:59.316            "params": {
00:19:59.316              "name": "PT_Nvme",
00:19:59.316              "mode": "pt",
00:19:59.316              "cache_bdev_name": "Nvme0n1p0",
00:19:59.316              "core_bdev_name": "Nvme0n1p1"
00:19:59.316            }
00:19:59.316          },
00:19:59.316          {
00:19:59.316            "method": "bdev_ocf_create",
00:19:59.316            "params": {
00:19:59.316              "name": "WT_Nvme",
00:19:59.316              "mode": "wt",
00:19:59.316              "cache_bdev_name": "Nvme0n1p2",
00:19:59.316              "core_bdev_name": "Nvme0n1p3"
00:19:59.316            }
00:19:59.316          },
00:19:59.316          {
00:19:59.316            "method": "bdev_ocf_create",
00:19:59.316            "params": {
00:19:59.316              "name": "WB_Nvme0",
00:19:59.316              "mode": "wb",
00:19:59.316              "cache_bdev_name": "Nvme0n1p4",
00:19:59.316              "core_bdev_name": "Nvme0n1p5"
00:19:59.316            }
00:19:59.316          },
00:19:59.316          {
00:19:59.316            "method": "bdev_ocf_create",
00:19:59.316            "params": {
00:19:59.316              "name": "WB_Nvme1",
00:19:59.316              "mode": "wb",
00:19:59.316              "cache_bdev_name": "Nvme0n1p6",
00:19:59.316              "core_bdev_name": "Nvme0n1p7"
00:19:59.316            }
00:19:59.316          },
00:19:59.316          {
00:19:59.316            "method": "bdev_wait_for_examine"
00:19:59.316          }
00:19:59.316        ]
00:19:59.316      }
00:19:59.316    ]
00:19:59.316  }
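Decoded, this modes.conf attaches the controller at 0000:5e:00.0, splits its namespace into eight 101 MB pieces, and pairs them into four OCF bdevs, one per cache mode under test: PT_Nvme (pass-through), WT_Nvme (write-through), and two WB (write-back) instances. The test feeds this JSON to fio's spdk_bdev engine at startup; against a live SPDK target the same topology could be built over RPC. A hedged sketch (flag spellings per scripts/rpc.py, default RPC socket assumed):

./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
./scripts/rpc.py bdev_split_create Nvme0n1 8 --split-size-mb 101
# cache bdev first, core bdev second, matching the JSON params above
./scripts/rpc.py bdev_ocf_create PT_Nvme pt Nvme0n1p0 Nvme0n1p1
./scripts/rpc.py bdev_ocf_create WT_Nvme wt Nvme0n1p2 Nvme0n1p3
./scripts/rpc.py bdev_ocf_create WB_Nvme0 wb Nvme0n1p4 Nvme0n1p5
./scripts/rpc.py bdev_ocf_create WB_Nvme1 wb Nvme0n1p6 Nvme0n1p7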
00:19:59.316   00:50:48	-- integrity/fio-modes.sh@100 -- # fio_verify --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
00:19:59.316   00:50:48	-- integrity/fio-modes.sh@12 -- # fio_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
00:19:59.316   00:50:48	-- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
00:19:59.316   00:50:48	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:19:59.316   00:50:48	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:59.316   00:50:48	-- common/autotest_common.sh@1328 -- # local sanitizers
00:19:59.316   00:50:48	-- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev
00:19:59.316   00:50:48	-- common/autotest_common.sh@1330 -- # shift
00:19:59.316   00:50:48	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:19:59.316   00:50:48	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:19:59.316    00:50:48	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev
00:19:59.316    00:50:48	-- common/autotest_common.sh@1334 -- # grep libasan
00:19:59.316    00:50:48	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:19:59.316   00:50:48	-- common/autotest_common.sh@1334 -- # asan_lib=
00:19:59.316   00:50:48	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:19:59.316   00:50:48	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:19:59.316    00:50:48	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev
00:19:59.316    00:50:48	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:19:59.316    00:50:48	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:19:59.316   00:50:48	-- common/autotest_common.sh@1334 -- # asan_lib=
00:19:59.316   00:50:48	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:19:59.316   00:50:48	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev'
00:19:59.316   00:50:48	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
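The fio_bdev/fio_plugin indirection above exists for one reason: if the spdk_bdev plugin was built with a sanitizer, the sanitizer runtime generally must be LD_PRELOADed ahead of the plugin, because fio loads ioengines with dlopen() and ASan cannot initialize that late. A sketch of the detection loop the trace walks (both greps came back empty on this build, so LD_PRELOAD ends up holding only the plugin itself):

plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev
job=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
	# third ldd column is the resolved library path; empty if not linked
	asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
	[[ -n $asan_lib ]] && break
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" --ioengine=spdk_bdev --thread=1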
00:19:59.573  randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:59.573  randrw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:59.573  write: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:59.573  rw: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:59.573  randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:59.573  fio-3.35
00:19:59.573  Starting 5 threads
00:19:59.573  EAL: No free 2048 kB hugepages reported on node 1
00:20:14.441  
00:20:14.441  randwrite: (groupid=0, jobs=5): err= 0: pid=1057796: Tue Dec 17 00:51:02 2024
00:20:14.441    read: IOPS=29.4k, BW=115MiB/s (120MB/s)(1149MiB/10007msec)
00:20:14.441      slat (usec): min=3, max=406, avg=26.91, stdev=24.42
00:20:14.441      clat (usec): min=57, max=32278, avg=5512.56, stdev=3120.93
00:20:14.441       lat (usec): min=80, max=32300, avg=5539.47, stdev=3127.48
00:20:14.441      clat percentiles (usec):
00:20:14.441       |  1.00th=[  314],  5.00th=[  644], 10.00th=[ 1254], 20.00th=[ 2966],
00:20:14.441       | 30.00th=[ 3949], 40.00th=[ 4817], 50.00th=[ 5473], 60.00th=[ 6128],
00:20:14.441       | 70.00th=[ 6783], 80.00th=[ 7439], 90.00th=[ 8979], 95.00th=[10945],
00:20:14.441       | 99.00th=[15533], 99.50th=[16712], 99.90th=[20579], 99.95th=[22938],
00:20:14.441       | 99.99th=[30278]
00:20:14.441     bw (  KiB/s): min= 3584, max=57632, per=30.37%, avg=35695.16, stdev=5721.55, samples=80
00:20:14.441     iops        : min=  896, max=14408, avg=8923.76, stdev=1430.40, samples=80
00:20:14.441    write: IOPS=22.4k, BW=87.6MiB/s (91.8MB/s)(873MiB/9975msec); 0 zone resets
00:20:14.441      slat (usec): min=5, max=493, avg=25.90, stdev=19.04
00:20:14.441      clat (usec): min=30, max=98989, avg=7020.58, stdev=7528.35
00:20:14.441       lat (usec): min=54, max=99030, avg=7046.48, stdev=7535.64
00:20:14.441      clat percentiles (usec):
00:20:14.441       |  1.00th=[   66],  5.00th=[   84], 10.00th=[  117], 20.00th=[  219],
00:20:14.441       | 30.00th=[  889], 40.00th=[ 3589], 50.00th=[ 5800], 60.00th=[ 7570],
00:20:14.441       | 70.00th=[ 9241], 80.00th=[11338], 90.00th=[16319], 95.00th=[21627],
00:20:14.441       | 99.00th=[33424], 99.50th=[39060], 99.90th=[51643], 99.95th=[56361],
00:20:14.441       | 99.99th=[70779]
00:20:14.441     bw (  KiB/s): min=36888, max=143800, per=99.87%, avg=89533.84, stdev=6991.96, samples=95
00:20:14.441     iops        : min= 9222, max=35950, avg=22383.42, stdev=1748.00, samples=95
00:20:14.441    lat (usec)   : 50=0.03%, 100=3.25%, 250=5.94%, 500=3.58%, 750=3.05%
00:20:14.441    lat (usec)   : 1000=2.32%
00:20:14.441    lat (msec)   : 2=4.90%, 4=12.07%, 10=49.51%, 20=12.60%, 50=2.70%
00:20:14.441    lat (msec)   : 100=0.06%
00:20:14.441    cpu          : usr=99.56%, sys=0.01%, ctx=312, majf=0, minf=648
00:20:14.441    IO depths    : 1=6.3%, 2=5.2%, 4=5.4%, 8=7.8%, 16=9.6%, 32=17.2%, >=64=48.6%
00:20:14.441       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:14.441       complete  : 0=0.0%, 4=97.6%, 8=0.6%, 16=0.4%, 32=0.7%, 64=0.5%, >=64=0.2%
00:20:14.441       issued rwts: total=294078,223570,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:14.441       latency   : target=0, window=0, percentile=100.00%, depth=128
00:20:14.441  
00:20:14.441  Run status group 0 (all jobs):
00:20:14.441     READ: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1149MiB (1205MB), run=10007-10007msec
00:20:14.441    WRITE: bw=87.6MiB/s (91.8MB/s), 87.6MiB/s-87.6MiB/s (91.8MB/s-91.8MB/s), io=873MiB (916MB), run=9975-9975msec
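As a quick sanity check on that summary: 294,078 reads x 4 KiB = 1,204,543,488 bytes, i.e. 1149 MiB (1205 MB), and 1149 MiB / 10.007 s gives 114.8 MiB/s, which fio rounds to the 115 MiB/s shown. The write side works out the same way: 223,570 x 4 KiB is about 873 MiB over 9.975 s, or 87.6 MiB/s.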
00:20:19.870   00:51:08	-- integrity/fio-modes.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:20:19.870   00:51:08	-- integrity/fio-modes.sh@103 -- # cleanup
00:20:19.870   00:51:08	-- integrity/fio-modes.sh@16 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf
00:20:19.870  
00:20:19.870  real	0m32.768s
00:20:19.870  user	1m8.942s
00:20:19.870  sys	0m6.430s
00:20:19.870   00:51:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:19.870   00:51:08	-- common/autotest_common.sh@10 -- # set +x
00:20:19.870  ************************************
00:20:19.870  END TEST ocf_fio_modes
00:20:19.870  ************************************
00:20:19.870   00:51:08	-- ocf/ocf.sh@12 -- # run_test ocf_bdevperf_iotypes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh
00:20:19.870   00:51:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:19.870   00:51:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:19.870   00:51:08	-- common/autotest_common.sh@10 -- # set +x
00:20:19.870  ************************************
00:20:19.870  START TEST ocf_bdevperf_iotypes
00:20:19.870  ************************************
00:20:19.870   00:51:08	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh
00:20:19.870    00:51:08	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:19.870     00:51:08	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:19.870     00:51:08	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:19.870    00:51:09	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:19.870    00:51:09	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:19.870    00:51:09	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:19.870    00:51:09	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:19.870    00:51:09	-- scripts/common.sh@335 -- # IFS=.-:
00:20:19.870    00:51:09	-- scripts/common.sh@335 -- # read -ra ver1
00:20:19.870    00:51:09	-- scripts/common.sh@336 -- # IFS=.-:
00:20:19.870    00:51:09	-- scripts/common.sh@336 -- # read -ra ver2
00:20:19.870    00:51:09	-- scripts/common.sh@337 -- # local 'op=<'
00:20:19.870    00:51:09	-- scripts/common.sh@339 -- # ver1_l=2
00:20:19.870    00:51:09	-- scripts/common.sh@340 -- # ver2_l=1
00:20:19.870    00:51:09	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:19.870    00:51:09	-- scripts/common.sh@343 -- # case "$op" in
00:20:19.870    00:51:09	-- scripts/common.sh@344 -- # : 1
00:20:19.870    00:51:09	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:19.870    00:51:09	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:19.870     00:51:09	-- scripts/common.sh@364 -- # decimal 1
00:20:19.870     00:51:09	-- scripts/common.sh@352 -- # local d=1
00:20:19.870     00:51:09	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:19.870     00:51:09	-- scripts/common.sh@354 -- # echo 1
00:20:19.870    00:51:09	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:19.870     00:51:09	-- scripts/common.sh@365 -- # decimal 2
00:20:19.870     00:51:09	-- scripts/common.sh@352 -- # local d=2
00:20:19.870     00:51:09	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:19.870     00:51:09	-- scripts/common.sh@354 -- # echo 2
00:20:19.870    00:51:09	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:19.870    00:51:09	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:19.870    00:51:09	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:19.870    00:51:09	-- scripts/common.sh@367 -- # return 0
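The xtrace above is scripts/common.sh deciding whether the installed lcov predates version 2 (it does: 1.15 < 2), which selects the legacy --rc option spellings exported just below. The comparison logic, reduced to a sketch (the real decimal() helper additionally coerces non-numeric components to 0):

cmp_versions() { # e.g. cmp_versions 1.15 '<' 2
	local IFS=.-: op=$2 v ver1 ver2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$3"
	# compare component-wise; missing components count as 0
	for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
		((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *'>'* ]]; return; }
		((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *'<'* ]]; return; }
	done
	[[ $op == *'='* ]] # all components equal: true only for <=, >=, ==
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov older than 2: use legacy --rc options"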
00:20:19.870    00:51:09	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:19.870    00:51:09	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:19.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.870  		--rc genhtml_branch_coverage=1
00:20:19.870  		--rc genhtml_function_coverage=1
00:20:19.870  		--rc genhtml_legend=1
00:20:19.870  		--rc geninfo_all_blocks=1
00:20:19.870  		--rc geninfo_unexecuted_blocks=1
00:20:19.870  		
00:20:19.870  		'
00:20:19.870    00:51:09	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:19.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.870  		--rc genhtml_branch_coverage=1
00:20:19.870  		--rc genhtml_function_coverage=1
00:20:19.870  		--rc genhtml_legend=1
00:20:19.870  		--rc geninfo_all_blocks=1
00:20:19.870  		--rc geninfo_unexecuted_blocks=1
00:20:19.870  		
00:20:19.870  		'
00:20:19.870    00:51:09	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:19.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.870  		--rc genhtml_branch_coverage=1
00:20:19.870  		--rc genhtml_function_coverage=1
00:20:19.870  		--rc genhtml_legend=1
00:20:19.870  		--rc geninfo_all_blocks=1
00:20:19.870  		--rc geninfo_unexecuted_blocks=1
00:20:19.870  		
00:20:19.870  		'
00:20:19.870    00:51:09	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:19.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:19.870  		--rc genhtml_branch_coverage=1
00:20:19.870  		--rc genhtml_function_coverage=1
00:20:19.870  		--rc genhtml_legend=1
00:20:19.870  		--rc geninfo_all_blocks=1
00:20:19.870  		--rc geninfo_unexecuted_blocks=1
00:20:19.870  		
00:20:19.870  		'
00:20:19.870   00:51:09	-- integrity/bdevperf-iotypes.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf
00:20:19.870   00:51:09	-- integrity/bdevperf-iotypes.sh@12 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf
00:20:19.870   00:51:09	-- integrity/bdevperf-iotypes.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w flush
00:20:19.870    00:51:09	-- integrity/bdevperf-iotypes.sh@13 -- # gen_malloc_ocf_json
00:20:19.870    00:51:09	-- integrity/mallocs.conf@2 -- # local size=300
00:20:19.870    00:51:09	-- integrity/mallocs.conf@3 -- # local block_size=512
00:20:19.870    00:51:09	-- integrity/mallocs.conf@4 -- # local config
00:20:19.870    00:51:09	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:19.870  {
00:20:19.870    "method": "bdev_malloc_create",
00:20:19.870    "params": {
00:20:19.870      "name": "Malloc$malloc",
00:20:19.870      "num_blocks": $(( (size << 20) / block_size )),
00:20:19.870      "block_size": 512
00:20:19.870    }
00:20:19.870  }
00:20:19.870  JSON
00:20:19.870  )")
00:20:19.870     00:51:09	-- integrity/mallocs.conf@21 -- # cat
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:19.870  {
00:20:19.870    "method": "bdev_malloc_create",
00:20:19.870    "params": {
00:20:19.870      "name": "Malloc$malloc",
00:20:19.870      "num_blocks": $(( (size << 20) / block_size )),
00:20:19.870      "block_size": 512
00:20:19.870    }
00:20:19.870  }
00:20:19.870  JSON
00:20:19.870  )")
00:20:19.870     00:51:09	-- integrity/mallocs.conf@21 -- # cat
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:19.870  {
00:20:19.870    "method": "bdev_malloc_create",
00:20:19.870    "params": {
00:20:19.870      "name": "Malloc$malloc",
00:20:19.870      "num_blocks": $(( (size << 20) / block_size )),
00:20:19.870      "block_size": 512
00:20:19.870    }
00:20:19.870  }
00:20:19.870  JSON
00:20:19.870  )")
00:20:19.870     00:51:09	-- integrity/mallocs.conf@21 -- # cat
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:19.870    00:51:09	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:20:19.870    00:51:09	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:20:19.870    00:51:09	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:20:19.870    00:51:09	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:20:19.870    00:51:09	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:20:19.870    00:51:09	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:20:19.870    00:51:09	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:20:19.870    00:51:09	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:20:19.870    00:51:09	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:19.870    00:51:09	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:19.870  {
00:20:19.870    "method": "bdev_ocf_create",
00:20:19.870    "params": {
00:20:19.870      "name": "MalCache$ocf",
00:20:19.870      "mode": "${ocf_mode[ocf]}",
00:20:19.870      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:19.870      "core_bdev_name": "${ocf_core[ocf]}"
00:20:19.870    }
00:20:19.870  }
00:20:19.870  JSON
00:20:19.870  )")
00:20:19.870     00:51:09	-- integrity/mallocs.conf@44 -- # cat
00:20:19.870    00:51:09	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:19.870    00:51:09	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:19.870  {
00:20:19.870    "method": "bdev_ocf_create",
00:20:19.870    "params": {
00:20:19.870      "name": "MalCache$ocf",
00:20:19.870      "mode": "${ocf_mode[ocf]}",
00:20:19.870      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:19.871      "core_bdev_name": "${ocf_core[ocf]}"
00:20:19.871    }
00:20:19.871  }
00:20:19.871  JSON
00:20:19.871  )")
00:20:19.871     00:51:09	-- integrity/mallocs.conf@44 -- # cat
00:20:19.871  [2024-12-17 00:51:09.105287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:19.871  [2024-12-17 00:51:09.105361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060483 ]
00:20:19.871    00:51:09	-- integrity/mallocs.conf@47 -- # jq .
00:20:19.871     00:51:09	-- integrity/mallocs.conf@47 -- # IFS=,
00:20:19.871     00:51:09	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:20:19.871    "method": "bdev_malloc_create",
00:20:19.871    "params": {
00:20:19.871      "name": "Malloc0",
00:20:19.871      "num_blocks": 614400,
00:20:19.871      "block_size": 512
00:20:19.871    }
00:20:19.871  },{
00:20:19.871    "method": "bdev_malloc_create",
00:20:19.871    "params": {
00:20:19.871      "name": "Malloc1",
00:20:19.871      "num_blocks": 614400,
00:20:19.871      "block_size": 512
00:20:19.871    }
00:20:19.871  },{
00:20:19.871    "method": "bdev_malloc_create",
00:20:19.871    "params": {
00:20:19.871      "name": "Malloc2",
00:20:19.871      "num_blocks": 614400,
00:20:19.871      "block_size": 512
00:20:19.871    }
00:20:19.871  },{
00:20:19.871    "method": "bdev_ocf_create",
00:20:19.871    "params": {
00:20:19.871      "name": "MalCache1",
00:20:19.871      "mode": "wt",
00:20:19.871      "cache_bdev_name": "Malloc0",
00:20:19.871      "core_bdev_name": "Malloc1"
00:20:19.871    }
00:20:19.871  },{
00:20:19.871    "method": "bdev_ocf_create",
00:20:19.871    "params": {
00:20:19.871      "name": "MalCache2",
00:20:19.871      "mode": "pt",
00:20:19.871      "cache_bdev_name": "Malloc0",
00:20:19.871      "core_bdev_name": "Malloc2"
00:20:19.871    }
00:20:19.871  }'
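gen_malloc_ocf_json, traced at length above, is shell templating: it accumulates per-bdev JSON fragments in a bash array via heredocs, joins them with commas, wraps them in the same subsystems/bdev shell shown for modes.conf earlier, and pretty-prints with jq. The 614400 in the output is simply the block count of a 300 MiB malloc bdev: (300 << 20) / 512 = 314,572,800 / 512 = 614,400. A trimmed sketch of the pattern, mirroring the <<-JSON heredocs in the trace (the two bdev_ocf_create fragments are elided):

gen_malloc_ocf_json() {
	local size=300 block_size=512 config=() malloc
	for malloc in 0 1 2; do
		config+=("$(cat <<-JSON
		{
		  "method": "bdev_malloc_create",
		  "params": {
		    "name": "Malloc$malloc",
		    "num_blocks": $(((size << 20) / block_size)),
		    "block_size": $block_size
		  }
		}
		JSON
		)")
	done
	# the MalCache1 (wt) and MalCache2 (pt) fragments are appended the
	# same way, then everything is joined on commas and wrapped:
	jq . <<-JSON
	{
	  "subsystems": [
	    {
	      "subsystem": "bdev",
	      "config": [
	        $(IFS=','; printf '%s\n' "${config[*]}")
	      ]
	    }
	  ]
	}
	JSON
}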
00:20:20.130  EAL: No free 2048 kB hugepages reported on node 1
00:20:20.130  [2024-12-17 00:51:09.212414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:20.130  [2024-12-17 00:51:09.258830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:20.389  [2024-12-17 00:51:09.431338] 'OCF_Core' volume operations registered
00:20:20.389  [2024-12-17 00:51:09.433756] 'OCF_Cache' volume operations registered
00:20:20.389  [2024-12-17 00:51:09.436675] 'OCF Composite' volume operations registered
00:20:20.389  [2024-12-17 00:51:09.439128] 'SPDK_block_device' volume operations registered
00:20:20.648  [2024-12-17 00:51:09.688876] Inserting cache MalCache1
00:20:20.648  [2024-12-17 00:51:09.689384] MalCache1: Metadata initialized
00:20:20.648  [2024-12-17 00:51:09.689839] MalCache1: Successfully added
00:20:20.648  [2024-12-17 00:51:09.689855] MalCache1: Cache mode : wt
00:20:20.648  [2024-12-17 00:51:09.699626] MalCache1: Super block config offset : 0 kiB
00:20:20.648  [2024-12-17 00:51:09.699648] MalCache1: Super block config size : 2200 B
00:20:20.648  [2024-12-17 00:51:09.699656] MalCache1: Super block runtime offset : 128 kiB
00:20:20.648  [2024-12-17 00:51:09.699662] MalCache1: Super block runtime size : 4 B
00:20:20.648  [2024-12-17 00:51:09.699669] MalCache1: Reserved offset : 256 kiB
00:20:20.648  [2024-12-17 00:51:09.699676] MalCache1: Reserved size : 128 kiB
00:20:20.648  [2024-12-17 00:51:09.699682] MalCache1: Part config offset : 384 kiB
00:20:20.648  [2024-12-17 00:51:09.699689] MalCache1: Part config size : 48 kiB
00:20:20.648  [2024-12-17 00:51:09.699695] MalCache1: Part runtime offset : 640 kiB
00:20:20.648  [2024-12-17 00:51:09.699702] MalCache1: Part runtime size : 72 kiB
00:20:20.648  [2024-12-17 00:51:09.699708] MalCache1: Core config offset : 768 kiB
00:20:20.648  [2024-12-17 00:51:09.699715] MalCache1: Core config size : 512 kiB
00:20:20.648  [2024-12-17 00:51:09.699721] MalCache1: Core runtime offset : 1792 kiB
00:20:20.648  [2024-12-17 00:51:09.699728] MalCache1: Core runtime size : 1172 kiB
00:20:20.648  [2024-12-17 00:51:09.699734] MalCache1: Core UUID offset : 3072 kiB
00:20:20.648  [2024-12-17 00:51:09.699741] MalCache1: Core UUID size : 16384 kiB
00:20:20.648  [2024-12-17 00:51:09.699747] MalCache1: Cleaning offset : 35840 kiB
00:20:20.648  [2024-12-17 00:51:09.699754] MalCache1: Cleaning size : 788 kiB
00:20:20.649  [2024-12-17 00:51:09.699760] MalCache1: LRU list offset : 36736 kiB
00:20:20.649  [2024-12-17 00:51:09.699767] MalCache1: LRU list size : 592 kiB
00:20:20.649  [2024-12-17 00:51:09.699773] MalCache1: Collision offset : 37376 kiB
00:20:20.649  [2024-12-17 00:51:09.699780] MalCache1: Collision size : 788 kiB
00:20:20.649  [2024-12-17 00:51:09.699786] MalCache1: List info offset : 38272 kiB
00:20:20.649  [2024-12-17 00:51:09.699793] MalCache1: List info size : 592 kiB
00:20:20.649  [2024-12-17 00:51:09.699800] MalCache1: Hash offset : 38912 kiB
00:20:20.649  [2024-12-17 00:51:09.699806] MalCache1: Hash size : 68 kiB
00:20:20.649  [2024-12-17 00:51:09.699813] MalCache1: Cache line size: 4 kiB
00:20:20.649  [2024-12-17 00:51:09.699822] MalCache1: Metadata capacity: 20 MiB
00:20:20.649  [2024-12-17 00:51:09.709260] MalCache1: Policy 'always' initialized successfully
00:20:20.908  [2024-12-17 00:51:09.922112] MalCache1: Done saving cache state!
00:20:20.908  [2024-12-17 00:51:09.953731] MalCache1: Cache attached
00:20:20.908  [2024-12-17 00:51:09.953828] MalCache1: Successfully attached
00:20:20.908  [2024-12-17 00:51:09.954097] MalCache1: Inserting core Malloc1
00:20:20.908  [2024-12-17 00:51:09.954121] MalCache1.Malloc1: Sequential cutoff init
00:20:20.908  [2024-12-17 00:51:09.985689] MalCache1.Malloc1: Successfully added
00:20:20.908  [2024-12-17 00:51:09.991491] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:20:20.908  [2024-12-17 00:51:09.991703] MalCache1: Inserting core Malloc2
00:20:20.908  [2024-12-17 00:51:09.991724] MalCache1.Malloc2: Sequential cutoff init
00:20:20.908  [2024-12-17 00:51:10.023932] MalCache1.Malloc2: Successfully added
00:20:20.908  Running I/O for 4 seconds...
00:20:25.100  
00:20:25.100                                                                                                  Latency(us)
00:20:25.100  
[2024-12-16T23:51:14.365Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:25.100  
[2024-12-16T23:51:14.365Z]  Job: MalCache1 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096)
00:20:25.100  	 MalCache1           :       4.00   29906.80     116.82       0.00     0.00    4274.01     733.72    5670.29
00:20:25.100  
[2024-12-16T23:51:14.365Z]  Job: MalCache2 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096)
00:20:25.100  	 MalCache2           :       4.01   29896.52     116.78       0.00     0.00    4273.59     694.54    5670.29
00:20:25.100  
[2024-12-16T23:51:14.365Z]  ===================================================================================================================
00:20:25.100  
[2024-12-16T23:51:14.365Z]  Total                       :              59803.31     233.61       0.00     0.00    4273.80     694.54    5670.29
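Reading the table: MiB/s is just IOPS at the 4 KiB IO size (29,906.80 x 4096 / 2^20 = 116.82 MiB/s for MalCache1), and the Total row is the two jobs summed, modulo rounding. The wt and pt caches land within about 0.05% of each other on this flush-only workload.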
00:20:25.100  [2024-12-17 00:51:14.061914] MalCache1: Flushing cache
00:20:25.100  [2024-12-17 00:51:14.061942] MalCache1: Flushing cache completed
00:20:25.100  [2024-12-17 00:51:14.063291] MalCache1: Stopping cache
00:20:25.100  [2024-12-17 00:51:14.250458] MalCache1: Done saving cache state!
00:20:25.100  [2024-12-17 00:51:14.263897] Cache MalCache1 successfully stopped
00:20:25.670   00:51:14	-- integrity/bdevperf-iotypes.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w unmap
00:20:25.670    00:51:14	-- integrity/bdevperf-iotypes.sh@14 -- # gen_malloc_ocf_json
00:20:25.670    00:51:14	-- integrity/mallocs.conf@2 -- # local size=300
00:20:25.670    00:51:14	-- integrity/mallocs.conf@3 -- # local block_size=512
00:20:25.670    00:51:14	-- integrity/mallocs.conf@4 -- # local config
00:20:25.670    00:51:14	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:25.670  {
00:20:25.670    "method": "bdev_malloc_create",
00:20:25.670    "params": {
00:20:25.670      "name": "Malloc$malloc",
00:20:25.670      "num_blocks": $(( (size << 20) / block_size )),
00:20:25.670      "block_size": 512
00:20:25.670    }
00:20:25.670  }
00:20:25.670  JSON
00:20:25.670  )")
00:20:25.670     00:51:14	-- integrity/mallocs.conf@21 -- # cat
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:25.670  {
00:20:25.670    "method": "bdev_malloc_create",
00:20:25.670    "params": {
00:20:25.670      "name": "Malloc$malloc",
00:20:25.670      "num_blocks": $(( (size << 20) / block_size )),
00:20:25.670      "block_size": 512
00:20:25.670    }
00:20:25.670  }
00:20:25.670  JSON
00:20:25.670  )")
00:20:25.670     00:51:14	-- integrity/mallocs.conf@21 -- # cat
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:25.670  {
00:20:25.670    "method": "bdev_malloc_create",
00:20:25.670    "params": {
00:20:25.670      "name": "Malloc$malloc",
00:20:25.670      "num_blocks": $(( (size << 20) / block_size )),
00:20:25.670      "block_size": 512
00:20:25.670    }
00:20:25.670  }
00:20:25.670  JSON
00:20:25.670  )")
00:20:25.670     00:51:14	-- integrity/mallocs.conf@21 -- # cat
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:25.670    00:51:14	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:20:25.670    00:51:14	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:20:25.670    00:51:14	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:20:25.670    00:51:14	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:20:25.670    00:51:14	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:20:25.670    00:51:14	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:20:25.670    00:51:14	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:20:25.670    00:51:14	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:20:25.670    00:51:14	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:25.670    00:51:14	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:25.670  {
00:20:25.670    "method": "bdev_ocf_create",
00:20:25.670    "params": {
00:20:25.670      "name": "MalCache$ocf",
00:20:25.670      "mode": "${ocf_mode[ocf]}",
00:20:25.670      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:25.670      "core_bdev_name": "${ocf_core[ocf]}"
00:20:25.670    }
00:20:25.670  }
00:20:25.670  JSON
00:20:25.670  )")
00:20:25.670     00:51:14	-- integrity/mallocs.conf@44 -- # cat
00:20:25.670    00:51:14	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:25.670    00:51:14	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:25.670  {
00:20:25.670    "method": "bdev_ocf_create",
00:20:25.670    "params": {
00:20:25.670      "name": "MalCache$ocf",
00:20:25.670      "mode": "${ocf_mode[ocf]}",
00:20:25.670      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:25.670      "core_bdev_name": "${ocf_core[ocf]}"
00:20:25.670    }
00:20:25.670  }
00:20:25.670  JSON
00:20:25.670  )")
00:20:25.670     00:51:14	-- integrity/mallocs.conf@44 -- # cat
00:20:25.670    00:51:14	-- integrity/mallocs.conf@47 -- # jq .
00:20:25.670     00:51:14	-- integrity/mallocs.conf@47 -- # IFS=,
00:20:25.670     00:51:14	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:20:25.670    "method": "bdev_malloc_create",
00:20:25.670    "params": {
00:20:25.670      "name": "Malloc0",
00:20:25.670      "num_blocks": 614400,
00:20:25.670      "block_size": 512
00:20:25.670    }
00:20:25.670  },{
00:20:25.670    "method": "bdev_malloc_create",
00:20:25.670    "params": {
00:20:25.670      "name": "Malloc1",
00:20:25.670      "num_blocks": 614400,
00:20:25.670      "block_size": 512
00:20:25.670    }
00:20:25.670  },{
00:20:25.670    "method": "bdev_malloc_create",
00:20:25.670    "params": {
00:20:25.670      "name": "Malloc2",
00:20:25.670      "num_blocks": 614400,
00:20:25.670      "block_size": 512
00:20:25.670    }
00:20:25.670  },{
00:20:25.670    "method": "bdev_ocf_create",
00:20:25.670    "params": {
00:20:25.670      "name": "MalCache1",
00:20:25.670      "mode": "wt",
00:20:25.670      "cache_bdev_name": "Malloc0",
00:20:25.670      "core_bdev_name": "Malloc1"
00:20:25.670    }
00:20:25.670  },{
00:20:25.670    "method": "bdev_ocf_create",
00:20:25.670    "params": {
00:20:25.670      "name": "MalCache2",
00:20:25.670      "mode": "pt",
00:20:25.670      "cache_bdev_name": "Malloc0",
00:20:25.670      "core_bdev_name": "Malloc2"
00:20:25.670    }
00:20:25.670  }'
00:20:25.670  [2024-12-17 00:51:14.868058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:25.670  [2024-12-17 00:51:14.868128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061218 ]
00:20:25.670  EAL: No free 2048 kB hugepages reported on node 1
00:20:25.929  [2024-12-17 00:51:14.975378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:25.929  [2024-12-17 00:51:15.025182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:26.188  [2024-12-17 00:51:15.208353] 'OCF_Core' volume operations registered
00:20:26.188  [2024-12-17 00:51:15.210787] 'OCF_Cache' volume operations registered
00:20:26.188  [2024-12-17 00:51:15.213706] 'OCF Composite' volume operations registered
00:20:26.188  [2024-12-17 00:51:15.216180] 'SPDK_block_device' volume operations registered
00:20:26.447  [2024-12-17 00:51:15.464937] Inserting cache MalCache1
00:20:26.447  [2024-12-17 00:51:15.465405] MalCache1: Metadata initialized
00:20:26.447  [2024-12-17 00:51:15.465849] MalCache1: Successfully added
00:20:26.447  [2024-12-17 00:51:15.465863] MalCache1: Cache mode : wt
00:20:26.447  [2024-12-17 00:51:15.475712] MalCache1: Super block config offset : 0 kiB
00:20:26.447  [2024-12-17 00:51:15.475735] MalCache1: Super block config size : 2200 B
00:20:26.447  [2024-12-17 00:51:15.475743] MalCache1: Super block runtime offset : 128 kiB
00:20:26.447  [2024-12-17 00:51:15.475749] MalCache1: Super block runtime size : 4 B
00:20:26.447  [2024-12-17 00:51:15.475756] MalCache1: Reserved offset : 256 kiB
00:20:26.447  [2024-12-17 00:51:15.475763] MalCache1: Reserved size : 128 kiB
00:20:26.447  [2024-12-17 00:51:15.475769] MalCache1: Part config offset : 384 kiB
00:20:26.447  [2024-12-17 00:51:15.475775] MalCache1: Part config size : 48 kiB
00:20:26.447  [2024-12-17 00:51:15.475782] MalCache1: Part runtime offset : 640 kiB
00:20:26.447  [2024-12-17 00:51:15.475788] MalCache1: Part runtime size : 72 kiB
00:20:26.447  [2024-12-17 00:51:15.475794] MalCache1: Core config offset : 768 kiB
00:20:26.447  [2024-12-17 00:51:15.475801] MalCache1: Core config size : 512 kiB
00:20:26.447  [2024-12-17 00:51:15.475807] MalCache1: Core runtime offset : 1792 kiB
00:20:26.447  [2024-12-17 00:51:15.475814] MalCache1: Core runtime size : 1172 kiB
00:20:26.447  [2024-12-17 00:51:15.475820] MalCache1: Core UUID offset : 3072 kiB
00:20:26.447  [2024-12-17 00:51:15.475826] MalCache1: Core UUID size : 16384 kiB
00:20:26.447  [2024-12-17 00:51:15.475833] MalCache1: Cleaning offset : 35840 kiB
00:20:26.447  [2024-12-17 00:51:15.475839] MalCache1: Cleaning size : 788 kiB
00:20:26.447  [2024-12-17 00:51:15.475846] MalCache1: LRU list offset : 36736 kiB
00:20:26.447  [2024-12-17 00:51:15.475852] MalCache1: LRU list size : 592 kiB
00:20:26.447  [2024-12-17 00:51:15.475858] MalCache1: Collision offset : 37376 kiB
00:20:26.447  [2024-12-17 00:51:15.475865] MalCache1: Collision size : 788 kiB
00:20:26.447  [2024-12-17 00:51:15.475871] MalCache1: List info offset : 38272 kiB
00:20:26.447  [2024-12-17 00:51:15.475877] MalCache1: List info size : 592 kiB
00:20:26.447  [2024-12-17 00:51:15.475884] MalCache1: Hash offset : 38912 kiB
00:20:26.447  [2024-12-17 00:51:15.475895] MalCache1: Hash size : 68 kiB
00:20:26.447  [2024-12-17 00:51:15.475903] MalCache1: Cache line size: 4 kiB
00:20:26.447  [2024-12-17 00:51:15.475911] MalCache1: Metadata capacity: 20 MiB
00:20:26.447  [2024-12-17 00:51:15.485355] MalCache1: Policy 'always' initialized successfully
00:20:26.447  [2024-12-17 00:51:15.697249] MalCache1: Done saving cache state!
00:20:26.711  [2024-12-17 00:51:15.728366] MalCache1: Cache attached
00:20:26.711  [2024-12-17 00:51:15.728462] MalCache1: Successfully attached
00:20:26.711  [2024-12-17 00:51:15.728743] MalCache1: Inserting core Malloc1
00:20:26.711  [2024-12-17 00:51:15.728766] MalCache1.Malloc1: Sequential cutoff init
00:20:26.711  [2024-12-17 00:51:15.759468] MalCache1.Malloc1: Successfully added
00:20:26.711  [2024-12-17 00:51:15.765510] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:20:26.711  [2024-12-17 00:51:15.765740] MalCache1: Inserting core Malloc2
00:20:26.711  [2024-12-17 00:51:15.765762] MalCache1.Malloc2: Sequential cutoff init
00:20:26.711  [2024-12-17 00:51:15.796568] MalCache1.Malloc2: Successfully added
00:20:26.711  Running I/O for 4 seconds...
00:20:30.904  
00:20:30.904                                                                                                  Latency(us)
00:20:30.904  
[2024-12-16T23:51:20.169Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:30.904  
[2024-12-16T23:51:20.169Z]  Job: MalCache1 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096)
00:20:30.904  	 MalCache1           :       4.00   23639.30      92.34       0.00     0.00    5418.47    1189.62 4026531.84
00:20:30.904  
[2024-12-16T23:51:20.169Z]  Job: MalCache2 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096)
00:20:30.904  	 MalCache2           :       4.01   23636.30      92.33       0.00     0.00    5416.87    1032.90 4026531.84
00:20:30.904  
[2024-12-16T23:51:20.169Z]  ===================================================================================================================
00:20:30.904  
[2024-12-16T23:51:20.169Z]  Total                       :              47275.59     184.67       0.00     0.00    5417.67    1032.90 4026531.84
00:20:30.904  [2024-12-17 00:51:19.834800] MalCache1: Flushing cache
00:20:30.904  [2024-12-17 00:51:19.834833] MalCache1: Flushing cache completed
00:20:30.904  [2024-12-17 00:51:19.835616] MalCache1: Stopping cache
00:20:30.904  [2024-12-17 00:51:20.023305] MalCache1: Done saving cache state!
00:20:30.904  [2024-12-17 00:51:20.036902] Cache MalCache1 successfully stopped
00:20:31.473   00:51:20	-- integrity/bdevperf-iotypes.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w write
00:20:31.473    00:51:20	-- integrity/bdevperf-iotypes.sh@15 -- # gen_malloc_ocf_json
00:20:31.473    00:51:20	-- integrity/mallocs.conf@2 -- # local size=300
00:20:31.473    00:51:20	-- integrity/mallocs.conf@3 -- # local block_size=512
00:20:31.473    00:51:20	-- integrity/mallocs.conf@4 -- # local config
00:20:31.473    00:51:20	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:31.473  {
00:20:31.473    "method": "bdev_malloc_create",
00:20:31.473    "params": {
00:20:31.473      "name": "Malloc$malloc",
00:20:31.473      "num_blocks": $(( (size << 20) / block_size )),
00:20:31.473      "block_size": 512
00:20:31.473    }
00:20:31.473  }
00:20:31.473  JSON
00:20:31.473  )")
00:20:31.473     00:51:20	-- integrity/mallocs.conf@21 -- # cat
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:31.473  {
00:20:31.473    "method": "bdev_malloc_create",
00:20:31.473    "params": {
00:20:31.473      "name": "Malloc$malloc",
00:20:31.473      "num_blocks": $(( (size << 20) / block_size )),
00:20:31.473      "block_size": 512
00:20:31.473    }
00:20:31.473  }
00:20:31.473  JSON
00:20:31.473  )")
00:20:31.473     00:51:20	-- integrity/mallocs.conf@21 -- # cat
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:31.473  {
00:20:31.473    "method": "bdev_malloc_create",
00:20:31.473    "params": {
00:20:31.473      "name": "Malloc$malloc",
00:20:31.473      "num_blocks": $(( (size << 20) / block_size )),
00:20:31.473      "block_size": 512
00:20:31.473    }
00:20:31.473  }
00:20:31.473  JSON
00:20:31.473  )")
00:20:31.473     00:51:20	-- integrity/mallocs.conf@21 -- # cat
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:31.473    00:51:20	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:20:31.473    00:51:20	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:20:31.473    00:51:20	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:20:31.473    00:51:20	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:20:31.473    00:51:20	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:20:31.473    00:51:20	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:20:31.473    00:51:20	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:20:31.473    00:51:20	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:20:31.473    00:51:20	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:31.473    00:51:20	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:31.473  {
00:20:31.473    "method": "bdev_ocf_create",
00:20:31.473    "params": {
00:20:31.473      "name": "MalCache$ocf",
00:20:31.473      "mode": "${ocf_mode[ocf]}",
00:20:31.473      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:31.473      "core_bdev_name": "${ocf_core[ocf]}"
00:20:31.473    }
00:20:31.473  }
00:20:31.473  JSON
00:20:31.473  )")
00:20:31.473     00:51:20	-- integrity/mallocs.conf@44 -- # cat
00:20:31.473    00:51:20	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:31.473    00:51:20	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:31.473  {
00:20:31.473    "method": "bdev_ocf_create",
00:20:31.473    "params": {
00:20:31.473      "name": "MalCache$ocf",
00:20:31.473      "mode": "${ocf_mode[ocf]}",
00:20:31.473      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:31.473      "core_bdev_name": "${ocf_core[ocf]}"
00:20:31.473    }
00:20:31.473  }
00:20:31.473  JSON
00:20:31.474  )")
00:20:31.474     00:51:20	-- integrity/mallocs.conf@44 -- # cat
00:20:31.474    00:51:20	-- integrity/mallocs.conf@47 -- # jq .
00:20:31.474     00:51:20	-- integrity/mallocs.conf@47 -- # IFS=,
00:20:31.474     00:51:20	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:20:31.474    "method": "bdev_malloc_create",
00:20:31.474    "params": {
00:20:31.474      "name": "Malloc0",
00:20:31.474      "num_blocks": 614400,
00:20:31.474      "block_size": 512
00:20:31.474    }
00:20:31.474  },{
00:20:31.474    "method": "bdev_malloc_create",
00:20:31.474    "params": {
00:20:31.474      "name": "Malloc1",
00:20:31.474      "num_blocks": 614400,
00:20:31.474      "block_size": 512
00:20:31.474    }
00:20:31.474  },{
00:20:31.474    "method": "bdev_malloc_create",
00:20:31.474    "params": {
00:20:31.474      "name": "Malloc2",
00:20:31.474      "num_blocks": 614400,
00:20:31.474      "block_size": 512
00:20:31.474    }
00:20:31.474  },{
00:20:31.474    "method": "bdev_ocf_create",
00:20:31.474    "params": {
00:20:31.474      "name": "MalCache1",
00:20:31.474      "mode": "wt",
00:20:31.474      "cache_bdev_name": "Malloc0",
00:20:31.474      "core_bdev_name": "Malloc1"
00:20:31.474    }
00:20:31.474  },{
00:20:31.474    "method": "bdev_ocf_create",
00:20:31.474    "params": {
00:20:31.474      "name": "MalCache2",
00:20:31.474      "mode": "pt",
00:20:31.474      "cache_bdev_name": "Malloc0",
00:20:31.474      "core_bdev_name": "Malloc2"
00:20:31.474    }
00:20:31.474  }'
00:20:31.474  [2024-12-17 00:51:20.690700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:31.474  [2024-12-17 00:51:20.690772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062031 ]
00:20:31.733  EAL: No free 2048 kB hugepages reported on node 1
00:20:31.733  [2024-12-17 00:51:20.799085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.733  [2024-12-17 00:51:20.849311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:31.996  [2024-12-17 00:51:21.025827] 'OCF_Core' volume operations registered
00:20:31.996  [2024-12-17 00:51:21.028208] 'OCF_Cache' volume operations registered
00:20:31.996  [2024-12-17 00:51:21.031110] 'OCF Composite' volume operations registered
00:20:31.996  [2024-12-17 00:51:21.033499] 'SPDK_block_device' volume operations registered
00:20:32.256  [2024-12-17 00:51:21.259208] Inserting cache MalCache1
00:20:32.256  [2024-12-17 00:51:21.259615] MalCache1: Metadata initialized
00:20:32.256  [2024-12-17 00:51:21.260066] MalCache1: Successfully added
00:20:32.256  [2024-12-17 00:51:21.260080] MalCache1: Cache mode : wt
00:20:32.256  [2024-12-17 00:51:21.268997] MalCache1: Super block config offset : 0 kiB
00:20:32.256  [2024-12-17 00:51:21.269019] MalCache1: Super block config size : 2200 B
00:20:32.256  [2024-12-17 00:51:21.269026] MalCache1: Super block runtime offset : 128 kiB
00:20:32.256  [2024-12-17 00:51:21.269032] MalCache1: Super block runtime size : 4 B
00:20:32.256  [2024-12-17 00:51:21.269039] MalCache1: Reserved offset : 256 kiB
00:20:32.256  [2024-12-17 00:51:21.269046] MalCache1: Reserved size : 128 kiB
00:20:32.256  [2024-12-17 00:51:21.269052] MalCache1: Part config offset : 384 kiB
00:20:32.256  [2024-12-17 00:51:21.269058] MalCache1: Part config size : 48 kiB
00:20:32.256  [2024-12-17 00:51:21.269065] MalCache1: Part runtime offset : 640 kiB
00:20:32.256  [2024-12-17 00:51:21.269071] MalCache1: Part runtime size : 72 kiB
00:20:32.256  [2024-12-17 00:51:21.269078] MalCache1: Core config offset : 768 kiB
00:20:32.256  [2024-12-17 00:51:21.269084] MalCache1: Core config size : 512 kiB
00:20:32.256  [2024-12-17 00:51:21.269090] MalCache1: Core runtime offset : 1792 kiB
00:20:32.256  [2024-12-17 00:51:21.269097] MalCache1: Core runtime size : 1172 kiB
00:20:32.256  [2024-12-17 00:51:21.269103] MalCache1: Core UUID offset : 3072 kiB
00:20:32.256  [2024-12-17 00:51:21.269109] MalCache1: Core UUID size : 16384 kiB
00:20:32.256  [2024-12-17 00:51:21.269116] MalCache1: Cleaning offset : 35840 kiB
00:20:32.256  [2024-12-17 00:51:21.269122] MalCache1: Cleaning size : 788 kiB
00:20:32.256  [2024-12-17 00:51:21.269129] MalCache1: LRU list offset : 36736 kiB
00:20:32.256  [2024-12-17 00:51:21.269135] MalCache1: LRU list size : 592 kiB
00:20:32.256  [2024-12-17 00:51:21.269141] MalCache1: Collision offset : 37376 kiB
00:20:32.256  [2024-12-17 00:51:21.269148] MalCache1: Collision size : 788 kiB
00:20:32.256  [2024-12-17 00:51:21.269154] MalCache1: List info offset : 38272 kiB
00:20:32.256  [2024-12-17 00:51:21.269160] MalCache1: List info size : 592 kiB
00:20:32.256  [2024-12-17 00:51:21.269167] MalCache1: Hash offset : 38912 kiB
00:20:32.256  [2024-12-17 00:51:21.269173] MalCache1: Hash size : 68 kiB
00:20:32.256  [2024-12-17 00:51:21.269180] MalCache1: Cache line size: 4 kiB
00:20:32.256  [2024-12-17 00:51:21.269188] MalCache1: Metadata capacity: 20 MiB
00:20:32.256  [2024-12-17 00:51:21.277710] MalCache1: Policy 'always' initialized successfully
00:20:32.256  [2024-12-17 00:51:21.489276] MalCache1: Done saving cache state!
00:20:32.516  [2024-12-17 00:51:21.520534] MalCache1: Cache attached
00:20:32.516  [2024-12-17 00:51:21.520629] MalCache1: Successfully attached
00:20:32.516  [2024-12-17 00:51:21.520946] MalCache1: Inserting core Malloc1
00:20:32.516  [2024-12-17 00:51:21.520973] MalCache1.Malloc1: Sequential cutoff init
00:20:32.516  [2024-12-17 00:51:21.552140] MalCache1.Malloc1: Successfully added
00:20:32.516  [2024-12-17 00:51:21.557897] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:20:32.516  [2024-12-17 00:51:21.558117] MalCache1: Inserting core Malloc2
00:20:32.516  [2024-12-17 00:51:21.558140] MalCache1.Malloc2: Sequential cutoff init
00:20:32.516  [2024-12-17 00:51:21.589058] MalCache1.Malloc2: Successfully added
00:20:32.516  Running I/O for 4 seconds...
00:20:36.709  
00:20:36.709                                                                                                  Latency(us)
00:20:36.709  
[2024-12-16T23:51:25.974Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:36.709  
[2024-12-16T23:51:25.974Z]  Job: MalCache1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:36.709  	 MalCache1           :       4.01   16394.64      64.04       0.00     0.00    7798.92    1431.82   10314.80
00:20:36.709  
[2024-12-16T23:51:25.974Z]  Job: MalCache2 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:36.709  	 MalCache2           :       4.01   16395.85      64.05       0.00     0.00    7794.27    1389.08    9972.87
00:20:36.709  
[2024-12-16T23:51:25.974Z]  ===================================================================================================================
00:20:36.709  
[2024-12-16T23:51:25.974Z]  Total                       :              32790.49     128.09       0.00     0.00    7796.59    1389.08   10314.80
00:20:36.709  [2024-12-17 00:51:25.627811] MalCache1: Flushing cache
00:20:36.709  [2024-12-17 00:51:25.627854] MalCache1: Flushing cache completed
00:20:36.709  [2024-12-17 00:51:25.628673] MalCache1: Stopping cache
00:20:36.709  [2024-12-17 00:51:25.816797] MalCache1: Done saving cache state!
00:20:36.709  [2024-12-17 00:51:25.831472] Cache MalCache1 successfully stopped
00:20:37.277  
00:20:37.277  real	0m17.564s
00:20:37.277  user	0m15.961s
00:20:37.277  sys	0m1.695s
00:20:37.277   00:51:26	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:37.277   00:51:26	-- common/autotest_common.sh@10 -- # set +x
00:20:37.277  ************************************
00:20:37.277  END TEST ocf_bdevperf_iotypes
00:20:37.277  ************************************
00:20:37.277   00:51:26	-- ocf/ocf.sh@13 -- # run_test ocf_stats /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh
00:20:37.277   00:51:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:37.277   00:51:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:37.277   00:51:26	-- common/autotest_common.sh@10 -- # set +x
00:20:37.277  ************************************
00:20:37.277  START TEST ocf_stats
00:20:37.277  ************************************
00:20:37.277   00:51:26	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh
00:20:37.536    00:51:26	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:37.536     00:51:26	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:37.536     00:51:26	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:37.536    00:51:26	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:37.536    00:51:26	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:37.536    00:51:26	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:37.536    00:51:26	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:37.536    00:51:26	-- scripts/common.sh@335 -- # IFS=.-:
00:20:37.536    00:51:26	-- scripts/common.sh@335 -- # read -ra ver1
00:20:37.536    00:51:26	-- scripts/common.sh@336 -- # IFS=.-:
00:20:37.536    00:51:26	-- scripts/common.sh@336 -- # read -ra ver2
00:20:37.536    00:51:26	-- scripts/common.sh@337 -- # local 'op=<'
00:20:37.536    00:51:26	-- scripts/common.sh@339 -- # ver1_l=2
00:20:37.536    00:51:26	-- scripts/common.sh@340 -- # ver2_l=1
00:20:37.536    00:51:26	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:37.536    00:51:26	-- scripts/common.sh@343 -- # case "$op" in
00:20:37.536    00:51:26	-- scripts/common.sh@344 -- # : 1
00:20:37.536    00:51:26	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:37.536    00:51:26	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:37.536     00:51:26	-- scripts/common.sh@364 -- # decimal 1
00:20:37.536     00:51:26	-- scripts/common.sh@352 -- # local d=1
00:20:37.536     00:51:26	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:37.536     00:51:26	-- scripts/common.sh@354 -- # echo 1
00:20:37.536    00:51:26	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:37.536     00:51:26	-- scripts/common.sh@365 -- # decimal 2
00:20:37.536     00:51:26	-- scripts/common.sh@352 -- # local d=2
00:20:37.536     00:51:26	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:37.536     00:51:26	-- scripts/common.sh@354 -- # echo 2
00:20:37.536    00:51:26	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:37.536    00:51:26	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:37.536    00:51:26	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:37.536    00:51:26	-- scripts/common.sh@367 -- # return 0
00:20:37.536    00:51:26	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:37.536    00:51:26	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:37.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:37.536  		--rc genhtml_branch_coverage=1
00:20:37.536  		--rc genhtml_function_coverage=1
00:20:37.536  		--rc genhtml_legend=1
00:20:37.536  		--rc geninfo_all_blocks=1
00:20:37.536  		--rc geninfo_unexecuted_blocks=1
00:20:37.536  		
00:20:37.536  		'
00:20:37.536    00:51:26	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:37.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:37.536  		--rc genhtml_branch_coverage=1
00:20:37.536  		--rc genhtml_function_coverage=1
00:20:37.536  		--rc genhtml_legend=1
00:20:37.536  		--rc geninfo_all_blocks=1
00:20:37.536  		--rc geninfo_unexecuted_blocks=1
00:20:37.536  		
00:20:37.536  		'
00:20:37.536    00:51:26	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:37.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:37.536  		--rc genhtml_branch_coverage=1
00:20:37.536  		--rc genhtml_function_coverage=1
00:20:37.536  		--rc genhtml_legend=1
00:20:37.536  		--rc geninfo_all_blocks=1
00:20:37.536  		--rc geninfo_unexecuted_blocks=1
00:20:37.536  		
00:20:37.536  		'
00:20:37.536    00:51:26	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:37.536  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:37.536  		--rc genhtml_branch_coverage=1
00:20:37.536  		--rc genhtml_function_coverage=1
00:20:37.536  		--rc genhtml_legend=1
00:20:37.536  		--rc geninfo_all_blocks=1
00:20:37.536  		--rc geninfo_unexecuted_blocks=1
00:20:37.536  		
00:20:37.536  		'
00:20:37.536   00:51:26	-- integrity/stats.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf
00:20:37.536   00:51:26	-- integrity/stats.sh@12 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf
00:20:37.536   00:51:26	-- integrity/stats.sh@14 -- # bdev_perf_pid=1062877
00:20:37.536   00:51:26	-- integrity/stats.sh@15 -- # waitforlisten 1062877
00:20:37.536   00:51:26	-- common/autotest_common.sh@829 -- # '[' -z 1062877 ']'
00:20:37.536   00:51:26	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:37.536   00:51:26	-- integrity/stats.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock
00:20:37.536   00:51:26	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:37.536    00:51:26	-- integrity/stats.sh@13 -- # gen_malloc_ocf_json
00:20:37.536   00:51:26	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:37.536  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:37.536   00:51:26	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:37.536    00:51:26	-- integrity/mallocs.conf@2 -- # local size=300
00:20:37.536   00:51:26	-- common/autotest_common.sh@10 -- # set +x
00:20:37.536    00:51:26	-- integrity/mallocs.conf@3 -- # local block_size=512
00:20:37.536    00:51:26	-- integrity/mallocs.conf@4 -- # local config
00:20:37.536    00:51:26	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:20:37.536    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:20:37.536    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:37.536    00:51:26	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:37.536  {
00:20:37.536    "method": "bdev_malloc_create",
00:20:37.536    "params": {
00:20:37.536      "name": "Malloc$malloc",
00:20:37.536      "num_blocks": $(( (size << 20) / block_size )),
00:20:37.536      "block_size": 512
00:20:37.536    }
00:20:37.536  }
00:20:37.536  JSON
00:20:37.536  )")
00:20:37.536     00:51:26	-- integrity/mallocs.conf@21 -- # cat
00:20:37.536    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:37.536    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:37.536    00:51:26	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:37.536  {
00:20:37.536    "method": "bdev_malloc_create",
00:20:37.536    "params": {
00:20:37.536      "name": "Malloc$malloc",
00:20:37.536      "num_blocks": $(( (size << 20) / block_size )),
00:20:37.536      "block_size": 512
00:20:37.536    }
00:20:37.536  }
00:20:37.536  JSON
00:20:37.536  )")
00:20:37.536     00:51:26	-- integrity/mallocs.conf@21 -- # cat
00:20:37.536    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:37.536    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:37.536    00:51:26	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:20:37.536  {
00:20:37.536    "method": "bdev_malloc_create",
00:20:37.536    "params": {
00:20:37.536      "name": "Malloc$malloc",
00:20:37.536      "num_blocks": $(( (size << 20) / block_size )),
00:20:37.536      "block_size": 512
00:20:37.536    }
00:20:37.536  }
00:20:37.536  JSON
00:20:37.536  )")
00:20:37.536     00:51:26	-- integrity/mallocs.conf@21 -- # cat
00:20:37.537    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:20:37.537    00:51:26	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:20:37.537    00:51:26	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:20:37.537    00:51:26	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:20:37.537    00:51:26	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:20:37.537    00:51:26	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:20:37.537    00:51:26	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:20:37.537    00:51:26	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:20:37.537    00:51:26	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:20:37.537    00:51:26	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:20:37.537    00:51:26	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:37.537    00:51:26	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:37.537  {
00:20:37.537    "method": "bdev_ocf_create",
00:20:37.537    "params": {
00:20:37.537      "name": "MalCache$ocf",
00:20:37.537      "mode": "${ocf_mode[ocf]}",
00:20:37.537      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:37.537      "core_bdev_name": "${ocf_core[ocf]}"
00:20:37.537    }
00:20:37.537  }
00:20:37.537  JSON
00:20:37.537  )")
00:20:37.537     00:51:26	-- integrity/mallocs.conf@44 -- # cat
00:20:37.537    00:51:26	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:20:37.537    00:51:26	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:20:37.537  {
00:20:37.537    "method": "bdev_ocf_create",
00:20:37.537    "params": {
00:20:37.537      "name": "MalCache$ocf",
00:20:37.537      "mode": "${ocf_mode[ocf]}",
00:20:37.537      "cache_bdev_name": "${ocf_cache[ocf]}",
00:20:37.537      "core_bdev_name": "${ocf_core[ocf]}"
00:20:37.537    }
00:20:37.537  }
00:20:37.537  JSON
00:20:37.537  )")
00:20:37.537     00:51:26	-- integrity/mallocs.conf@44 -- # cat
00:20:37.537    00:51:26	-- integrity/mallocs.conf@47 -- # jq .
00:20:37.537     00:51:26	-- integrity/mallocs.conf@47 -- # IFS=,
00:20:37.537     00:51:26	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:20:37.537    "method": "bdev_malloc_create",
00:20:37.537    "params": {
00:20:37.537      "name": "Malloc0",
00:20:37.537      "num_blocks": 614400,
00:20:37.537      "block_size": 512
00:20:37.537    }
00:20:37.537  },{
00:20:37.537    "method": "bdev_malloc_create",
00:20:37.537    "params": {
00:20:37.537      "name": "Malloc1",
00:20:37.537      "num_blocks": 614400,
00:20:37.537      "block_size": 512
00:20:37.537    }
00:20:37.537  },{
00:20:37.537    "method": "bdev_malloc_create",
00:20:37.537    "params": {
00:20:37.537      "name": "Malloc2",
00:20:37.537      "num_blocks": 614400,
00:20:37.537      "block_size": 512
00:20:37.537    }
00:20:37.537  },{
00:20:37.537    "method": "bdev_ocf_create",
00:20:37.537    "params": {
00:20:37.537      "name": "MalCache1",
00:20:37.537      "mode": "wt",
00:20:37.537      "cache_bdev_name": "Malloc0",
00:20:37.537      "core_bdev_name": "Malloc1"
00:20:37.537    }
00:20:37.537  },{
00:20:37.537    "method": "bdev_ocf_create",
00:20:37.537    "params": {
00:20:37.537      "name": "MalCache2",
00:20:37.537      "mode": "pt",
00:20:37.537      "cache_bdev_name": "Malloc0",
00:20:37.537      "core_bdev_name": "Malloc2"
00:20:37.537    }
00:20:37.537  }'
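The xtrace above shows mallocs.conf expanding three bdev_malloc_create calls (size=300 MiB, block_size=512, so num_blocks = (300 << 20) / 512 = 614400) and two bdev_ocf_create calls (MalCache1 in write-through, MalCache2 in pass-through, both caching on Malloc0) into a comma-joined JSON list fed to bdevperf via --json /dev/fd/63. A minimal sketch of the same generation logic, assuming the script wraps the list in a standard SPDK "subsystems" document (gen_cfg is a hypothetical stand-in for gen_malloc_ocf_json):

    gen_cfg() {
        local size=300 block_size=512 malloc config=()
        for malloc in 0 1 2; do
            # (size << 20) / block_size converts MiB to 512-byte blocks: 614400 here
            config+=("{\"method\": \"bdev_malloc_create\", \"params\": {\"name\": \"Malloc$malloc\", \"num_blocks\": $(( (size << 20) / block_size )), \"block_size\": $block_size}}")
        done
        config+=('{"method": "bdev_ocf_create", "params": {"name": "MalCache1", "mode": "wt", "cache_bdev_name": "Malloc0", "core_bdev_name": "Malloc1"}}')
        config+=('{"method": "bdev_ocf_create", "params": {"name": "MalCache2", "mode": "pt", "cache_bdev_name": "Malloc0", "core_bdev_name": "Malloc2"}}')
        local IFS=,
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
    }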
00:20:37.537  [2024-12-17 00:51:26.722998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:37.537  [2024-12-17 00:51:26.723068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062877 ]
00:20:37.537  EAL: No free 2048 kB hugepages reported on node 1
00:20:37.795  [2024-12-17 00:51:26.829735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:37.795  [2024-12-17 00:51:26.875736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:37.795  [2024-12-17 00:51:27.033449] 'OCF_Core' volume operations registered
00:20:37.795  [2024-12-17 00:51:27.035595] 'OCF_Cache' volume operations registered
00:20:37.795  [2024-12-17 00:51:27.038163] 'OCF Composite' volume operations registered
00:20:37.795  [2024-12-17 00:51:27.040332] 'SPDK_block_device' volume operations registered
00:20:38.054  [2024-12-17 00:51:27.252111] Inserting cache MalCache1
00:20:38.054  [2024-12-17 00:51:27.252528] MalCache1: Metadata initialized
00:20:38.054  [2024-12-17 00:51:27.252976] MalCache1: Successfully added
00:20:38.054  [2024-12-17 00:51:27.252990] MalCache1: Cache mode : wt
00:20:38.054  [2024-12-17 00:51:27.261853] MalCache1: Super block config offset : 0 kiB
00:20:38.054  [2024-12-17 00:51:27.261876] MalCache1: Super block config size : 2200 B
00:20:38.054  [2024-12-17 00:51:27.261883] MalCache1: Super block runtime offset : 128 kiB
00:20:38.054  [2024-12-17 00:51:27.261894] MalCache1: Super block runtime size : 4 B
00:20:38.054  [2024-12-17 00:51:27.261901] MalCache1: Reserved offset : 256 kiB
00:20:38.054  [2024-12-17 00:51:27.261907] MalCache1: Reserved size : 128 kiB
00:20:38.054  [2024-12-17 00:51:27.261914] MalCache1: Part config offset : 384 kiB
00:20:38.054  [2024-12-17 00:51:27.261920] MalCache1: Part config size : 48 kiB
00:20:38.054  [2024-12-17 00:51:27.261927] MalCache1: Part runtime offset : 640 kiB
00:20:38.054  [2024-12-17 00:51:27.261933] MalCache1: Part runtime size : 72 kiB
00:20:38.054  [2024-12-17 00:51:27.261945] MalCache1: Core config offset : 768 kiB
00:20:38.054  [2024-12-17 00:51:27.261952] MalCache1: Core config size : 512 kiB
00:20:38.054  [2024-12-17 00:51:27.261958] MalCache1: Core runtime offset : 1792 kiB
00:20:38.054  [2024-12-17 00:51:27.261964] MalCache1: Core runtime size : 1172 kiB
00:20:38.054  [2024-12-17 00:51:27.261971] MalCache1: Core UUID offset : 3072 kiB
00:20:38.054  [2024-12-17 00:51:27.261977] MalCache1: Core UUID size : 16384 kiB
00:20:38.054  [2024-12-17 00:51:27.261984] MalCache1: Cleaning offset : 35840 kiB
00:20:38.054  [2024-12-17 00:51:27.261990] MalCache1: Cleaning size : 788 kiB
00:20:38.054  [2024-12-17 00:51:27.261996] MalCache1: LRU list offset : 36736 kiB
00:20:38.054  [2024-12-17 00:51:27.262003] MalCache1: LRU list size : 592 kiB
00:20:38.054  [2024-12-17 00:51:27.262009] MalCache1: Collision offset : 37376 kiB
00:20:38.054  [2024-12-17 00:51:27.262015] MalCache1: Collision size : 788 kiB
00:20:38.054  [2024-12-17 00:51:27.262021] MalCache1: List info offset : 38272 kiB
00:20:38.054  [2024-12-17 00:51:27.262028] MalCache1: List info size : 592 kiB
00:20:38.054  [2024-12-17 00:51:27.262034] MalCache1: Hash offset : 38912 kiB
00:20:38.054  [2024-12-17 00:51:27.262040] MalCache1: Hash size : 68 kiB
00:20:38.054  [2024-12-17 00:51:27.262047] MalCache1: Cache line size: 4 kiB
00:20:38.054  [2024-12-17 00:51:27.262055] MalCache1: Metadata capacity: 20 MiB
00:20:38.054  [2024-12-17 00:51:27.270574] MalCache1: Policy 'always' initialized successfully
00:20:38.312  [2024-12-17 00:51:27.483606] MalCache1: Done saving cache state!
00:20:38.312  [2024-12-17 00:51:27.516707] MalCache1: Cache attached
00:20:38.312  [2024-12-17 00:51:27.516803] MalCache1: Successfully attached
00:20:38.312  [2024-12-17 00:51:27.517100] MalCache1: Inserting core Malloc1
00:20:38.312  [2024-12-17 00:51:27.517122] MalCache1.Malloc1: Sequential cutoff init
00:20:38.312  [2024-12-17 00:51:27.549756] MalCache1.Malloc1: Successfully added
00:20:38.312  [2024-12-17 00:51:27.555157] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:20:38.312  [2024-12-17 00:51:27.555408] MalCache1: Inserting core Malloc2
00:20:38.312  [2024-12-17 00:51:27.555429] MalCache1.Malloc2: Sequential cutoff init
00:20:38.571  [2024-12-17 00:51:27.588091] MalCache1.Malloc2: Successfully added
00:20:38.571  Running I/O for 120 seconds...
00:20:38.571   00:51:27	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:38.571   00:51:27	-- common/autotest_common.sh@862 -- # return 0
00:20:38.571   00:51:27	-- integrity/stats.sh@16 -- # sleep 1
00:20:39.505   00:51:28	-- integrity/stats.sh@17 -- # rpc_cmd bdev_ocf_get_stats MalCache1
00:20:39.505   00:51:28	-- common/autotest_common.sh@561 -- # xtrace_disable
00:20:39.505   00:51:28	-- common/autotest_common.sh@10 -- # set +x
00:20:39.505  {
00:20:39.505  "usage": {
00:20:39.505  "occupancy": {
00:20:39.505  "count": 14496,
00:20:39.505  "percentage": "21.62",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "free": {
00:20:39.505  "count": 38048,
00:20:39.505  "percentage": "56.75",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "clean": {
00:20:39.505  "count": 14496,
00:20:39.505  "percentage": "100.0",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "dirty": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  }
00:20:39.505  },
00:20:39.505  "requests": {
00:20:39.505  "rd_hits": {
00:20:39.505  "count": 2,
00:20:39.505  "percentage": "0.1",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "rd_partial_misses": {
00:20:39.505  "count": 1,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "rd_full_misses": {
00:20:39.505  "count": 1,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "rd_total": {
00:20:39.505  "count": 4,
00:20:39.505  "percentage": "0.2",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "wr_hits": {
00:20:39.505  "count": 8,
00:20:39.505  "percentage": "0.5",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "wr_partial_misses": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "wr_full_misses": {
00:20:39.505  "count": 14488,
00:20:39.505  "percentage": "99.91",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "wr_total": {
00:20:39.505  "count": 14496,
00:20:39.505  "percentage": "99.97",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "rd_pt": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "wr_pt": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "serviced": {
00:20:39.505  "count": 14500,
00:20:39.505  "percentage": "100.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "total": {
00:20:39.505  "count": 14500,
00:20:39.505  "percentage": "100.0",
00:20:39.505  "units": "Requests"
00:20:39.505  }
00:20:39.505  },
00:20:39.505  "blocks": {
00:20:39.505  "core_volume_rd": {
00:20:39.505  "count": 9,
00:20:39.505  "percentage": "0.6",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "core_volume_wr": {
00:20:39.505  "count": 14496,
00:20:39.505  "percentage": "99.93",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "core_volume_total": {
00:20:39.505  "count": 14505,
00:20:39.505  "percentage": "100.0",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "cache_volume_rd": {
00:20:39.505  "count": 2,
00:20:39.505  "percentage": "0.1",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "cache_volume_wr": {
00:20:39.505  "count": 14505,
00:20:39.505  "percentage": "99.98",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "cache_volume_total": {
00:20:39.505  "count": 14507,
00:20:39.505  "percentage": "100.0",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "volume_rd": {
00:20:39.505  "count": 11,
00:20:39.505  "percentage": "0.7",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "volume_wr": {
00:20:39.505  "count": 14496,
00:20:39.505  "percentage": "99.92",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  },
00:20:39.505  "volume_total": {
00:20:39.505  "count": 14507,
00:20:39.505  "percentage": "100.0",
00:20:39.505  "units": "4KiB blocks"
00:20:39.505  }
00:20:39.505  },
00:20:39.505  "errors": {
00:20:39.505  "core_volume_rd": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "core_volume_wr": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "core_volume_total": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "cache_volume_rd": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "cache_volume_wr": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "cache_volume_total": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  },
00:20:39.505  "total": {
00:20:39.505  "count": 0,
00:20:39.505  "percentage": "0.0",
00:20:39.505  "units": "Requests"
00:20:39.505  }
00:20:39.505  }
00:20:39.505  }
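The snapshot is internally consistent: rd_total (4) + wr_total (14496) = total = serviced (14500), and dirty is 0 because MalCache1 runs in wt mode, where every write lands on both cache and core. Illustrative assertions against this payload (the rpc.py path and socket are the ones used above; the specific checks are this editor's, not part of the test):

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    stats=$($rpc -s /var/tmp/spdk.sock bdev_ocf_get_stats MalCache1)
    echo "$stats" | jq -e '.requests.serviced.count == .requests.total.count'
    echo "$stats" | jq -e '.requests.rd_total.count + .requests.wr_total.count == .requests.total.count'
    echo "$stats" | jq -e '.usage.dirty.count == 0'   # write-through leaves no dirty lines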
00:20:39.505   00:51:28	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:39.505   00:51:28	-- integrity/stats.sh@18 -- # kill -9 1062877
00:20:39.505   00:51:28	-- integrity/stats.sh@19 -- # wait 1062877
00:20:39.505  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh: line 19: 1062877 Killed                  $bdevperf --json <(gen_malloc_ocf_json) -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock
00:20:39.505   00:51:28	-- integrity/stats.sh@19 -- # true
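Line 19 of stats.sh reports bdevperf as Killed because the script tears the workload down with SIGKILL once the stats snapshot is taken; wait then returns 128+9=137, which the trailing true swallows so the test still passes. The idiom, sketched:

    $bdevperf --json <(gen_malloc_ocf_json) -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock &
    bdev_perf_pid=$!
    # ... query bdev_ocf_get_stats while I/O runs ...
    kill -9 $bdev_perf_pid
    wait $bdev_perf_pid || true   # wait reports 137 (128+SIGKILL); don't fail the test on it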
00:20:39.505  
00:20:39.505  real	0m2.244s
00:20:39.505  user	0m2.121s
00:20:39.505  sys	0m0.631s
00:20:39.505   00:51:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:39.505   00:51:28	-- common/autotest_common.sh@10 -- # set +x
00:20:39.505  ************************************
00:20:39.505  END TEST ocf_stats
00:20:39.505  ************************************
00:20:39.764   00:51:28	-- ocf/ocf.sh@14 -- # run_test ocf_flush /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh
00:20:39.764   00:51:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:39.764   00:51:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:39.764   00:51:28	-- common/autotest_common.sh@10 -- # set +x
00:20:39.764  ************************************
00:20:39.764  START TEST ocf_flush
00:20:39.764  ************************************
00:20:39.764   00:51:28	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh
00:20:39.764    00:51:28	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:39.764     00:51:28	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:39.764     00:51:28	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:39.764    00:51:28	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:39.764    00:51:28	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:39.764    00:51:28	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:39.764    00:51:28	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:39.764    00:51:28	-- scripts/common.sh@335 -- # IFS=.-:
00:20:39.764    00:51:28	-- scripts/common.sh@335 -- # read -ra ver1
00:20:39.764    00:51:28	-- scripts/common.sh@336 -- # IFS=.-:
00:20:39.764    00:51:28	-- scripts/common.sh@336 -- # read -ra ver2
00:20:39.764    00:51:28	-- scripts/common.sh@337 -- # local 'op=<'
00:20:39.764    00:51:28	-- scripts/common.sh@339 -- # ver1_l=2
00:20:39.764    00:51:28	-- scripts/common.sh@340 -- # ver2_l=1
00:20:39.764    00:51:28	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:39.764    00:51:28	-- scripts/common.sh@343 -- # case "$op" in
00:20:39.764    00:51:28	-- scripts/common.sh@344 -- # : 1
00:20:39.764    00:51:28	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:39.764    00:51:28	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:39.764     00:51:28	-- scripts/common.sh@364 -- # decimal 1
00:20:39.764     00:51:28	-- scripts/common.sh@352 -- # local d=1
00:20:39.764     00:51:28	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:39.764     00:51:28	-- scripts/common.sh@354 -- # echo 1
00:20:39.764    00:51:28	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:39.764     00:51:28	-- scripts/common.sh@365 -- # decimal 2
00:20:39.764     00:51:28	-- scripts/common.sh@352 -- # local d=2
00:20:39.764     00:51:28	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:39.764     00:51:28	-- scripts/common.sh@354 -- # echo 2
00:20:39.764    00:51:28	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:39.764    00:51:28	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:39.764    00:51:28	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:39.764    00:51:28	-- scripts/common.sh@367 -- # return 0
00:20:39.764    00:51:28	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:39.764    00:51:28	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:39.764  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:39.764  		--rc genhtml_branch_coverage=1
00:20:39.764  		--rc genhtml_function_coverage=1
00:20:39.764  		--rc genhtml_legend=1
00:20:39.764  		--rc geninfo_all_blocks=1
00:20:39.764  		--rc geninfo_unexecuted_blocks=1
00:20:39.764  		
00:20:39.764  		'
00:20:39.764    00:51:28	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:39.764  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:39.764  		--rc genhtml_branch_coverage=1
00:20:39.764  		--rc genhtml_function_coverage=1
00:20:39.764  		--rc genhtml_legend=1
00:20:39.764  		--rc geninfo_all_blocks=1
00:20:39.764  		--rc geninfo_unexecuted_blocks=1
00:20:39.764  		
00:20:39.764  		'
00:20:39.764    00:51:28	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:39.764  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:39.764  		--rc genhtml_branch_coverage=1
00:20:39.764  		--rc genhtml_function_coverage=1
00:20:39.764  		--rc genhtml_legend=1
00:20:39.764  		--rc geninfo_all_blocks=1
00:20:39.764  		--rc geninfo_unexecuted_blocks=1
00:20:39.764  		
00:20:39.764  		'
00:20:39.764    00:51:28	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:39.764  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:39.764  		--rc genhtml_branch_coverage=1
00:20:39.764  		--rc genhtml_function_coverage=1
00:20:39.764  		--rc genhtml_legend=1
00:20:39.764  		--rc geninfo_all_blocks=1
00:20:39.764  		--rc geninfo_unexecuted_blocks=1
00:20:39.764  		
00:20:39.764  		'
00:20:39.764   00:51:28	-- integrity/flush.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf
00:20:39.764   00:51:28	-- integrity/flush.sh@11 -- # rpc_py='/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
00:20:39.764   00:51:28	-- integrity/flush.sh@73 -- # bdevperf_pid=1063236
00:20:39.764   00:51:28	-- integrity/flush.sh@74 -- # trap 'killprocess $bdevperf_pid' SIGINT SIGTERM EXIT
00:20:39.764   00:51:28	-- integrity/flush.sh@75 -- # waitforlisten 1063236
00:20:39.764   00:51:28	-- common/autotest_common.sh@829 -- # '[' -z 1063236 ']'
00:20:39.764   00:51:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:39.764   00:51:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:39.764   00:51:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:39.764  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:39.764   00:51:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:39.764   00:51:28	-- common/autotest_common.sh@10 -- # set +x
00:20:39.764   00:51:28	-- integrity/flush.sh@72 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock
00:20:39.764    00:51:28	-- integrity/flush.sh@72 -- # bdevperf_config
00:20:39.764    00:51:28	-- integrity/flush.sh@19 -- # local config
00:20:39.764     00:51:28	-- integrity/flush.sh@50 -- # cat
00:20:39.764    00:51:28	-- integrity/flush.sh@50 -- # config='{
00:20:39.764    "method": "bdev_malloc_create",
00:20:39.764    "params": {
00:20:39.764  "name": "Malloc0",
00:20:39.764  "num_blocks": 102400,
00:20:39.764  "block_size": 512
00:20:39.764    }
00:20:39.764  },
00:20:39.764  {
00:20:39.764    "method": "bdev_malloc_create",
00:20:39.764    "params": {
00:20:39.764  "name": "Malloc1",
00:20:39.764  "num_blocks": 1024000,
00:20:39.764  "block_size": 512
00:20:39.764    }
00:20:39.764  },
00:20:39.764  {
00:20:39.764    "method": "bdev_ocf_create",
00:20:39.764    "params": {
00:20:39.764  "name": "MalCache0",
00:20:39.764  "mode": "wb",
00:20:39.764  "cache_line_size": 4,
00:20:39.764  "cache_bdev_name": "Malloc0",
00:20:39.764  "core_bdev_name": "Malloc1"
00:20:39.764    }
00:20:39.764  }'
00:20:39.764    00:51:28	-- integrity/flush.sh@52 -- # jq .
00:20:39.764     00:51:28	-- integrity/flush.sh@53 -- # IFS=,
00:20:39.764     00:51:28	-- integrity/flush.sh@54 -- # printf '%s\n' '{
00:20:39.764    "method": "bdev_malloc_create",
00:20:39.764    "params": {
00:20:39.764  "name": "Malloc0",
00:20:39.764  "num_blocks": 102400,
00:20:39.764  "block_size": 512
00:20:39.764    }
00:20:39.764  },
00:20:39.764  {
00:20:39.765    "method": "bdev_malloc_create",
00:20:39.765    "params": {
00:20:39.765  "name": "Malloc1",
00:20:39.765  "num_blocks": 1024000,
00:20:39.765  "block_size": 512
00:20:39.765    }
00:20:39.765  },
00:20:39.765  {
00:20:39.765    "method": "bdev_ocf_create",
00:20:39.765    "params": {
00:20:39.765  "name": "MalCache0",
00:20:39.765  "mode": "wb",
00:20:39.765  "cache_line_size": 4,
00:20:39.765  "cache_bdev_name": "Malloc0",
00:20:39.765  "core_bdev_name": "Malloc1"
00:20:39.765    }
00:20:39.765  }'
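The flush test's config is deliberately lopsided: Malloc0 (the cache) is 102400 x 512 B = 50 MiB while Malloc1 (the core) is 1024000 x 512 B = 500 MiB, and MalCache0 runs in wb (write-back) mode with a 4 KiB cache line, so the 128-deep write workload accumulates dirty lines that the flush RPCs below must clean. The sizing, checked:

    # 102400 * 512 B and 1024000 * 512 B, expressed in MiB:
    echo $(( 102400 * 512 / 1048576 )) $(( 1024000 * 512 / 1048576 ))   # -> 50 500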
00:20:39.765  [2024-12-17 00:51:28.994794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:39.765  [2024-12-17 00:51:28.994863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063236 ]
00:20:40.023  EAL: No free 2048 kB hugepages reported on node 1
00:20:40.023  [2024-12-17 00:51:29.101876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.023  [2024-12-17 00:51:29.147896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:40.281  [2024-12-17 00:51:29.319610] 'OCF_Core' volume operations registered
00:20:40.281  [2024-12-17 00:51:29.322045] 'OCF_Cache' volume operations registered
00:20:40.281  [2024-12-17 00:51:29.324968] 'OCF Composite' volume operations registered
00:20:40.281  [2024-12-17 00:51:29.327403] 'SPDK_block_device' volume operations registered
00:20:40.281  [2024-12-17 00:51:29.485730] Inserting cache MalCache0
00:20:40.281  [2024-12-17 00:51:29.486218] MalCache0: Metadata initialized
00:20:40.281  [2024-12-17 00:51:29.486664] MalCache0: Successfully added
00:20:40.281  [2024-12-17 00:51:29.486679] MalCache0: Cache mode : wb
00:20:40.281  [2024-12-17 00:51:29.496078] MalCache0: Super block config offset : 0 kiB
00:20:40.281  [2024-12-17 00:51:29.496098] MalCache0: Super block config size : 2200 B
00:20:40.281  [2024-12-17 00:51:29.496105] MalCache0: Super block runtime offset : 128 kiB
00:20:40.281  [2024-12-17 00:51:29.496112] MalCache0: Super block runtime size : 4 B
00:20:40.281  [2024-12-17 00:51:29.496119] MalCache0: Reserved offset : 256 kiB
00:20:40.281  [2024-12-17 00:51:29.496125] MalCache0: Reserved size : 128 kiB
00:20:40.281  [2024-12-17 00:51:29.496132] MalCache0: Part config offset : 384 kiB
00:20:40.281  [2024-12-17 00:51:29.496138] MalCache0: Part config size : 48 kiB
00:20:40.281  [2024-12-17 00:51:29.496145] MalCache0: Part runtime offset : 640 kiB
00:20:40.281  [2024-12-17 00:51:29.496151] MalCache0: Part runtime size : 72 kiB
00:20:40.281  [2024-12-17 00:51:29.496158] MalCache0: Core config offset : 768 kiB
00:20:40.281  [2024-12-17 00:51:29.496164] MalCache0: Core config size : 512 kiB
00:20:40.281  [2024-12-17 00:51:29.496171] MalCache0: Core runtime offset : 1792 kiB
00:20:40.281  [2024-12-17 00:51:29.496178] MalCache0: Core runtime size : 1172 kiB
00:20:40.281  [2024-12-17 00:51:29.496184] MalCache0: Core UUID offset : 3072 kiB
00:20:40.281  [2024-12-17 00:51:29.496191] MalCache0: Core UUID size : 16384 kiB
00:20:40.281  [2024-12-17 00:51:29.496197] MalCache0: Cleaning offset : 35840 kiB
00:20:40.281  [2024-12-17 00:51:29.496204] MalCache0: Cleaning size : 44 kiB
00:20:40.281  [2024-12-17 00:51:29.496210] MalCache0: LRU list offset : 35968 kiB
00:20:40.281  [2024-12-17 00:51:29.496217] MalCache0: LRU list size : 36 kiB
00:20:40.281  [2024-12-17 00:51:29.496223] MalCache0: Collision offset : 36096 kiB
00:20:40.281  [2024-12-17 00:51:29.496230] MalCache0: Collision size : 44 kiB
00:20:40.281  [2024-12-17 00:51:29.496236] MalCache0: List info offset : 36224 kiB
00:20:40.281  [2024-12-17 00:51:29.496243] MalCache0: List info size : 36 kiB
00:20:40.281  [2024-12-17 00:51:29.496249] MalCache0: Hash offset : 36352 kiB
00:20:40.281  [2024-12-17 00:51:29.496256] MalCache0: Hash size : 4 kiB
00:20:40.281  [2024-12-17 00:51:29.496263] MalCache0: Cache line size: 4 kiB
00:20:40.281  [2024-12-17 00:51:29.496271] MalCache0: Metadata capacity: 18 MiB
00:20:40.281  [2024-12-17 00:51:29.505429] MalCache0: Policy 'always' initialized successfully
00:20:40.540  [2024-12-17 00:51:29.594261] MalCache0: Done saving cache state!
00:20:40.540  [2024-12-17 00:51:29.625980] MalCache0: Cache attached
00:20:40.540  [2024-12-17 00:51:29.626076] MalCache0: Successfully attached
00:20:40.540  [2024-12-17 00:51:29.626353] MalCache0: Inserting core Malloc1
00:20:40.540  [2024-12-17 00:51:29.626374] MalCache0.Malloc1: Sequential cutoff init
00:20:40.540  [2024-12-17 00:51:29.658722] MalCache0.Malloc1: Successfully added
00:20:40.540  Running I/O for 120 seconds...
00:20:40.798   00:51:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:40.798   00:51:29	-- common/autotest_common.sh@862 -- # return 0
00:20:40.798   00:51:29	-- integrity/flush.sh@76 -- # sleep 5
00:20:46.061   00:51:34	-- integrity/flush.sh@78 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_start MalCache0
00:20:46.061  [2024-12-17 00:51:35.110256] MalCache0: Flushing cache
00:20:46.061   00:51:35	-- integrity/flush.sh@79 -- # sleep 1
00:20:46.061  [2024-12-17 00:51:35.216830] MalCache0: Flushing cache completed
00:20:46.994   00:51:36	-- integrity/flush.sh@81 -- # check_flush_in_progress
00:20:46.994   00:51:36	-- integrity/flush.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0
00:20:46.994   00:51:36	-- integrity/flush.sh@15 -- # jq -e .in_progress
00:20:47.252   00:51:36	-- integrity/flush.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0
00:20:47.252   00:51:36	-- integrity/flush.sh@84 -- # jq -e '.status == 0'
00:20:47.510  true
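The sequence above is flush.sh's control flow: bdev_ocf_flush_start kicks off cleaning, check_flush_in_progress polls bdev_ocf_flush_status with jq -e .in_progress, and the final jq -e '.status == 0' (whose successful exit prints the true above) asserts the flush completed without error. A sketch of that loop, reconstructed from the xtrace and not guaranteed to match flush.sh line for line:

    rpc_py="/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc_py bdev_ocf_flush_start MalCache0
    while $rpc_py bdev_ocf_flush_status MalCache0 | jq -e .in_progress > /dev/null; do
        sleep 1                   # dirty lines still being written back to the core
    done
    $rpc_py bdev_ocf_flush_status MalCache0 | jq -e '.status == 0'   # 0 => flush succeeded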
00:20:47.510   00:51:36	-- integrity/flush.sh@1 -- # killprocess 1063236
00:20:47.510   00:51:36	-- common/autotest_common.sh@936 -- # '[' -z 1063236 ']'
00:20:47.510   00:51:36	-- common/autotest_common.sh@940 -- # kill -0 1063236
00:20:47.510    00:51:36	-- common/autotest_common.sh@941 -- # uname
00:20:47.510   00:51:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:47.510    00:51:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1063236
00:20:47.510   00:51:36	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:47.510   00:51:36	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:47.510   00:51:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1063236'
00:20:47.510  killing process with pid 1063236
00:20:47.510   00:51:36	-- common/autotest_common.sh@955 -- # kill 1063236
00:20:47.510   00:51:36	-- common/autotest_common.sh@960 -- # wait 1063236
00:20:47.510  Received shutdown signal, test time was about 6.947304 seconds
00:20:47.510  
00:20:47.510                                                                                                  Latency(us)
00:20:47.510  
[2024-12-16T23:51:36.775Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:47.510  
[2024-12-16T23:51:36.775Z]  Job: MalCache0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:47.510  	 MalCache0           :       6.95   39835.08     155.61       0.00     0.00    3209.08     144.25   90724.62
00:20:47.510  
[2024-12-16T23:51:36.775Z]  ===================================================================================================================
00:20:47.510  
[2024-12-16T23:51:36.775Z]  Total                       :              39835.08     155.61       0.00     0.00    3209.08     144.25   90724.62
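The summary is self-consistent: 39835.08 IOPS at 4096 B per I/O is 39835.08 / 256 ≈ 155.61 MiB/s, matching the MiB/s column, with zero failed and zero timed-out I/Os over the ~6.95 s the job actually ran before shutdown.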
00:20:47.510  [2024-12-17 00:51:36.638947] MalCache0: Flushing cache
00:20:47.510  [2024-12-17 00:51:36.726437] MalCache0: Flushing cache completed
00:20:47.510  [2024-12-17 00:51:36.726507] MalCache0: Stopping cache
00:20:47.769  [2024-12-17 00:51:36.813473] MalCache0: Done saving cache state!
00:20:47.769  [2024-12-17 00:51:36.830318] Cache MalCache0 successfully stopped
00:20:48.336  
00:20:48.336  real	0m8.586s
00:20:48.336  user	0m8.969s
00:20:48.336  sys	0m0.758s
00:20:48.336   00:51:37	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:48.336   00:51:37	-- common/autotest_common.sh@10 -- # set +x
00:20:48.336  ************************************
00:20:48.336  END TEST ocf_flush
00:20:48.336  ************************************
00:20:48.336   00:51:37	-- ocf/ocf.sh@15 -- # run_test ocf_create_destruct /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh
00:20:48.336   00:51:37	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:48.336   00:51:37	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:48.336   00:51:37	-- common/autotest_common.sh@10 -- # set +x
00:20:48.336  ************************************
00:20:48.336  START TEST ocf_create_destruct
00:20:48.336  ************************************
00:20:48.336   00:51:37	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh
00:20:48.336    00:51:37	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:48.336     00:51:37	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:48.336     00:51:37	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:48.336    00:51:37	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:48.336    00:51:37	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:48.336    00:51:37	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:48.336    00:51:37	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:48.336    00:51:37	-- scripts/common.sh@335 -- # IFS=.-:
00:20:48.336    00:51:37	-- scripts/common.sh@335 -- # read -ra ver1
00:20:48.336    00:51:37	-- scripts/common.sh@336 -- # IFS=.-:
00:20:48.336    00:51:37	-- scripts/common.sh@336 -- # read -ra ver2
00:20:48.336    00:51:37	-- scripts/common.sh@337 -- # local 'op=<'
00:20:48.336    00:51:37	-- scripts/common.sh@339 -- # ver1_l=2
00:20:48.336    00:51:37	-- scripts/common.sh@340 -- # ver2_l=1
00:20:48.336    00:51:37	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:48.336    00:51:37	-- scripts/common.sh@343 -- # case "$op" in
00:20:48.336    00:51:37	-- scripts/common.sh@344 -- # : 1
00:20:48.336    00:51:37	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:48.336    00:51:37	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:48.336     00:51:37	-- scripts/common.sh@364 -- # decimal 1
00:20:48.336     00:51:37	-- scripts/common.sh@352 -- # local d=1
00:20:48.336     00:51:37	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:48.336     00:51:37	-- scripts/common.sh@354 -- # echo 1
00:20:48.336    00:51:37	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:48.336     00:51:37	-- scripts/common.sh@365 -- # decimal 2
00:20:48.336     00:51:37	-- scripts/common.sh@352 -- # local d=2
00:20:48.336     00:51:37	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:48.336     00:51:37	-- scripts/common.sh@354 -- # echo 2
00:20:48.336    00:51:37	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:48.336    00:51:37	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:48.336    00:51:37	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:48.336    00:51:37	-- scripts/common.sh@367 -- # return 0
00:20:48.336    00:51:37	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:48.336    00:51:37	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:48.336  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.336  		--rc genhtml_branch_coverage=1
00:20:48.336  		--rc genhtml_function_coverage=1
00:20:48.336  		--rc genhtml_legend=1
00:20:48.336  		--rc geninfo_all_blocks=1
00:20:48.336  		--rc geninfo_unexecuted_blocks=1
00:20:48.336  		
00:20:48.336  		'
00:20:48.336    00:51:37	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:48.336  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.336  		--rc genhtml_branch_coverage=1
00:20:48.336  		--rc genhtml_function_coverage=1
00:20:48.336  		--rc genhtml_legend=1
00:20:48.336  		--rc geninfo_all_blocks=1
00:20:48.336  		--rc geninfo_unexecuted_blocks=1
00:20:48.336  		
00:20:48.336  		'
00:20:48.336    00:51:37	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:48.336  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.336  		--rc genhtml_branch_coverage=1
00:20:48.336  		--rc genhtml_function_coverage=1
00:20:48.336  		--rc genhtml_legend=1
00:20:48.336  		--rc geninfo_all_blocks=1
00:20:48.336  		--rc geninfo_unexecuted_blocks=1
00:20:48.336  		
00:20:48.336  		'
00:20:48.336    00:51:37	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:48.336  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.336  		--rc genhtml_branch_coverage=1
00:20:48.336  		--rc genhtml_function_coverage=1
00:20:48.336  		--rc genhtml_legend=1
00:20:48.336  		--rc geninfo_all_blocks=1
00:20:48.336  		--rc geninfo_unexecuted_blocks=1
00:20:48.336  		
00:20:48.336  		'
00:20:48.336   00:51:37	-- management/create-destruct.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:20:48.336   00:51:37	-- management/create-destruct.sh@21 -- # spdk_pid=1064419
00:20:48.336   00:51:37	-- management/create-destruct.sh@23 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:20:48.336   00:51:37	-- management/create-destruct.sh@25 -- # waitforlisten 1064419
00:20:48.336   00:51:37	-- common/autotest_common.sh@829 -- # '[' -z 1064419 ']'
00:20:48.336   00:51:37	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:48.336   00:51:37	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:48.336   00:51:37	-- management/create-destruct.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:20:48.336   00:51:37	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:48.336  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:48.336   00:51:37	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:48.336   00:51:37	-- common/autotest_common.sh@10 -- # set +x
00:20:48.595  [2024-12-17 00:51:37.625374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:48.595  [2024-12-17 00:51:37.625444] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064419 ]
00:20:48.595  EAL: No free 2048 kB hugepages reported on node 1
00:20:48.595  [2024-12-17 00:51:37.730791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:48.595  [2024-12-17 00:51:37.777491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:48.853  [2024-12-17 00:51:37.936405] 'OCF_Core' volume operations registered
00:20:48.853  [2024-12-17 00:51:37.938559] 'OCF_Cache' volume operations registered
00:20:48.853  [2024-12-17 00:51:37.941143] 'OCF Composite' volume operations registered
00:20:48.853  [2024-12-17 00:51:37.943301] 'SPDK_block_device' volume operations registered
00:20:49.418   00:51:38	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:49.418   00:51:38	-- common/autotest_common.sh@862 -- # return 0
00:20:49.418   00:51:38	-- management/create-destruct.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:20:49.677  Malloc0
00:20:49.677   00:51:38	-- management/create-destruct.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:20:49.935  Malloc1
00:20:49.935   00:51:39	-- management/create-destruct.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt Malloc0 NonExisting
00:20:50.193  [2024-12-17 00:51:39.363333] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for core device 'NonExisting' to connect
00:20:50.193  PartCache
00:20:50.193   00:51:39	-- management/create-destruct.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs PartCache
00:20:50.193   00:51:39	-- management/create-destruct.sh@32 -- # jq -e '.[0] | .started == false and .cache.attached and .core.attached == false'
00:20:50.451  true
00:20:50.451   00:51:39	-- management/create-destruct.sh@35 -- # jq -e '.[0] | .name == "PartCache"'
00:20:50.451   00:51:39	-- management/create-destruct.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs NonExisting
00:20:50.709  true
00:20:50.709   00:51:39	-- management/create-destruct.sh@38 -- # bdev_check_claimed Malloc0
00:20:50.709    00:51:39	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:20:50.709    00:51:39	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:50.968   00:51:40	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:20:50.968   00:51:40	-- management/create-destruct.sh@14 -- # return 0
00:20:50.968   00:51:40	-- management/create-destruct.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete PartCache
00:20:51.226   00:51:40	-- management/create-destruct.sh@44 -- # bdev_check_claimed Malloc0
00:20:51.226    00:51:40	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:20:51.226    00:51:40	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:51.484   00:51:40	-- management/create-destruct.sh@13 -- # '[' false = true ']'
00:20:51.484   00:51:40	-- management/create-destruct.sh@16 -- # return 1
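bdev_check_claimed is the test's probe for OCF ownership: it asks bdev_get_bdevs -b <name> for the bdev and tests .claimed with jq, returning 0 while PartCache holds Malloc0 and 1 once bdev_ocf_delete releases it. The helper as it reads from the xtrace above (a reconstruction, not a verbatim copy of create-destruct.sh):

    bdev_check_claimed() {
        # 0 if the named bdev is claimed (e.g. by an OCF vbdev), 1 otherwise
        if [ "$($rpc_py bdev_get_bdevs -b "$1" | jq '.[0].claimed')" = true ]; then
            return 0
        else
            return 1
        fi
    }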
00:20:51.484   00:51:40	-- management/create-destruct.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create FullCache wt Malloc0 Malloc1
00:20:51.743  [2024-12-17 00:51:40.864838] Inserting cache FullCache
00:20:51.743  [2024-12-17 00:51:40.865317] FullCache: Metadata initialized
00:20:51.743  [2024-12-17 00:51:40.865766] FullCache: Successfully added
00:20:51.743  [2024-12-17 00:51:40.865781] FullCache: Cache mode : wt
00:20:51.743  [2024-12-17 00:51:40.875499] FullCache: Super block config offset : 0 kiB
00:20:51.743  [2024-12-17 00:51:40.875524] FullCache: Super block config size : 2200 B
00:20:51.743  [2024-12-17 00:51:40.875531] FullCache: Super block runtime offset : 128 kiB
00:20:51.743  [2024-12-17 00:51:40.875538] FullCache: Super block runtime size : 4 B
00:20:51.743  [2024-12-17 00:51:40.875544] FullCache: Reserved offset : 256 kiB
00:20:51.743  [2024-12-17 00:51:40.875551] FullCache: Reserved size : 128 kiB
00:20:51.743  [2024-12-17 00:51:40.875557] FullCache: Part config offset : 384 kiB
00:20:51.743  [2024-12-17 00:51:40.875564] FullCache: Part config size : 48 kiB
00:20:51.743  [2024-12-17 00:51:40.875570] FullCache: Part runtime offset : 640 kiB
00:20:51.743  [2024-12-17 00:51:40.875577] FullCache: Part runtime size : 72 kiB
00:20:51.743  [2024-12-17 00:51:40.875583] FullCache: Core config offset : 768 kiB
00:20:51.743  [2024-12-17 00:51:40.875590] FullCache: Core config size : 512 kiB
00:20:51.743  [2024-12-17 00:51:40.875596] FullCache: Core runtime offset : 1792 kiB
00:20:51.743  [2024-12-17 00:51:40.875603] FullCache: Core runtime size : 1172 kiB
00:20:51.743  [2024-12-17 00:51:40.875609] FullCache: Core UUID offset : 3072 kiB
00:20:51.743  [2024-12-17 00:51:40.875615] FullCache: Core UUID size : 16384 kiB
00:20:51.743  [2024-12-17 00:51:40.875622] FullCache: Cleaning offset : 35840 kiB
00:20:51.743  [2024-12-17 00:51:40.875628] FullCache: Cleaning size : 196 kiB
00:20:51.743  [2024-12-17 00:51:40.875635] FullCache: LRU list offset : 36096 kiB
00:20:51.743  [2024-12-17 00:51:40.875642] FullCache: LRU list size : 148 kiB
00:20:51.743  [2024-12-17 00:51:40.875648] FullCache: Collision offset : 36352 kiB
00:20:51.743  [2024-12-17 00:51:40.875654] FullCache: Collision size : 196 kiB
00:20:51.743  [2024-12-17 00:51:40.875661] FullCache: List info offset : 36608 kiB
00:20:51.743  [2024-12-17 00:51:40.875667] FullCache: List info size : 148 kiB
00:20:51.743  [2024-12-17 00:51:40.875674] FullCache: Hash offset : 36864 kiB
00:20:51.743  [2024-12-17 00:51:40.875680] FullCache: Hash size : 20 kiB
00:20:51.743  [2024-12-17 00:51:40.875688] FullCache: Cache line size: 4 kiB
00:20:51.743  [2024-12-17 00:51:40.875697] FullCache: Metadata capacity: 18 MiB
00:20:51.743  [2024-12-17 00:51:40.884856] FullCache: Policy 'always' initialized successfully
00:20:51.743  [2024-12-17 00:51:40.998377] FullCache: Done saving cache state!
00:20:52.001  [2024-12-17 00:51:41.030041] FullCache: Cache attached
00:20:52.001  [2024-12-17 00:51:41.030139] FullCache: Successfully attached
00:20:52.001  [2024-12-17 00:51:41.030411] FullCache: Inserting core Malloc1
00:20:52.001  [2024-12-17 00:51:41.030435] FullCache.Malloc1: Sequential cutoff init
00:20:52.001  [2024-12-17 00:51:41.062268] FullCache.Malloc1: Successfully added
00:20:52.001  FullCache
00:20:52.001   00:51:41	-- management/create-destruct.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs FullCache
00:20:52.001   00:51:41	-- management/create-destruct.sh@51 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:20:52.259  true
00:20:52.259   00:51:41	-- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc0
00:20:52.259    00:51:41	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:20:52.259    00:51:41	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:52.518   00:51:41	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:20:52.518   00:51:41	-- management/create-destruct.sh@14 -- # return 0
00:20:52.518   00:51:41	-- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc1
00:20:52.518    00:51:41	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:52.518    00:51:41	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1
00:20:52.776   00:51:41	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:20:52.776   00:51:41	-- management/create-destruct.sh@14 -- # return 0
00:20:52.776   00:51:41	-- management/create-destruct.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete FullCache
00:20:53.035  [2024-12-17 00:51:42.079272] FullCache: Flushing cache
00:20:53.035  [2024-12-17 00:51:42.079304] FullCache: Flushing cache completed
00:20:53.035  [2024-12-17 00:51:42.080311] FullCache.Malloc1: Removing core
00:20:53.035  [2024-12-17 00:51:42.112397] FullCache: Core Malloc1 successfully removed
00:20:53.035  [2024-12-17 00:51:42.112457] FullCache: Stopping cache
00:20:53.035  [2024-12-17 00:51:42.218917] FullCache: Done saving cache state!
00:20:53.035  [2024-12-17 00:51:42.233984] Cache FullCache successfully stopped
00:20:53.035   00:51:42	-- management/create-destruct.sh@60 -- # bdev_check_claimed Malloc0
00:20:53.035    00:51:42	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:20:53.035    00:51:42	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:53.294   00:51:42	-- management/create-destruct.sh@13 -- # '[' false = true ']'
00:20:53.294   00:51:42	-- management/create-destruct.sh@16 -- # return 1
00:20:53.294   00:51:42	-- management/create-destruct.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create HotCache wt Malloc0 Malloc1
00:20:53.552  [2024-12-17 00:51:42.740547] Inserting cache HotCache
00:20:53.552  [2024-12-17 00:51:42.741064] HotCache: Metadata initialized
00:20:53.552  [2024-12-17 00:51:42.741504] HotCache: Successfully added
00:20:53.552  [2024-12-17 00:51:42.741512] HotCache: Cache mode : wt
00:20:53.552  [2024-12-17 00:51:42.751245] HotCache: Super block config offset : 0 kiB
00:20:53.552  [2024-12-17 00:51:42.751268] HotCache: Super block config size : 2200 B
00:20:53.552  [2024-12-17 00:51:42.751275] HotCache: Super block runtime offset : 128 kiB
00:20:53.552  [2024-12-17 00:51:42.751282] HotCache: Super block runtime size : 4 B
00:20:53.552  [2024-12-17 00:51:42.751289] HotCache: Reserved offset : 256 kiB
00:20:53.552  [2024-12-17 00:51:42.751295] HotCache: Reserved size : 128 kiB
00:20:53.552  [2024-12-17 00:51:42.751302] HotCache: Part config offset : 384 kiB
00:20:53.552  [2024-12-17 00:51:42.751308] HotCache: Part config size : 48 kiB
00:20:53.552  [2024-12-17 00:51:42.751315] HotCache: Part runtime offset : 640 kiB
00:20:53.552  [2024-12-17 00:51:42.751321] HotCache: Part runtime size : 72 kiB
00:20:53.552  [2024-12-17 00:51:42.751328] HotCache: Core config offset : 768 kiB
00:20:53.552  [2024-12-17 00:51:42.751334] HotCache: Core config size : 512 kiB
00:20:53.552  [2024-12-17 00:51:42.751341] HotCache: Core runtime offset : 1792 kiB
00:20:53.552  [2024-12-17 00:51:42.751347] HotCache: Core runtime size : 1172 kiB
00:20:53.552  [2024-12-17 00:51:42.751354] HotCache: Core UUID offset : 3072 kiB
00:20:53.552  [2024-12-17 00:51:42.751360] HotCache: Core UUID size : 16384 kiB
00:20:53.552  [2024-12-17 00:51:42.751367] HotCache: Cleaning offset : 35840 kiB
00:20:53.552  [2024-12-17 00:51:42.751373] HotCache: Cleaning size : 196 kiB
00:20:53.552  [2024-12-17 00:51:42.751380] HotCache: LRU list offset : 36096 kiB
00:20:53.552  [2024-12-17 00:51:42.751386] HotCache: LRU list size : 148 kiB
00:20:53.552  [2024-12-17 00:51:42.751393] HotCache: Collision offset : 36352 kiB
00:20:53.552  [2024-12-17 00:51:42.751399] HotCache: Collision size : 196 kiB
00:20:53.552  [2024-12-17 00:51:42.751406] HotCache: List info offset : 36608 kiB
00:20:53.552  [2024-12-17 00:51:42.751412] HotCache: List info size : 148 kiB
00:20:53.552  [2024-12-17 00:51:42.751426] HotCache: Hash offset : 36864 kiB
00:20:53.552  [2024-12-17 00:51:42.751433] HotCache: Hash size : 20 kiB
00:20:53.553  [2024-12-17 00:51:42.751440] HotCache: Cache line size: 4 kiB
00:20:53.553  [2024-12-17 00:51:42.751448] HotCache: Metadata capacity: 18 MiB
00:20:53.553  [2024-12-17 00:51:42.760675] HotCache: Policy 'always' initialized successfully
00:20:53.811  [2024-12-17 00:51:42.874538] HotCache: Done saving cache state!
00:20:53.811  [2024-12-17 00:51:42.905877] HotCache: Cache attached
00:20:53.811  [2024-12-17 00:51:42.905972] HotCache: Successfully attached
00:20:53.811  [2024-12-17 00:51:42.906241] HotCache: Inserting core Malloc1
00:20:53.811  [2024-12-17 00:51:42.906263] HotCache.Malloc1: Sequential cutoff init
00:20:53.811  [2024-12-17 00:51:42.937435] HotCache.Malloc1: Successfully added
00:20:53.811  HotCache
00:20:53.811   00:51:42	-- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc0
00:20:53.811    00:51:42	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:20:53.811    00:51:42	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:54.069   00:51:43	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:20:54.069   00:51:43	-- management/create-destruct.sh@14 -- # return 0
00:20:54.069   00:51:43	-- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc1
00:20:54.069    00:51:43	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1
00:20:54.069    00:51:43	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:54.327   00:51:43	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:20:54.327   00:51:43	-- management/create-destruct.sh@14 -- # return 0
00:20:54.327   00:51:43	-- management/create-destruct.sh@72 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:54.586  [2024-12-17 00:51:43.701594] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'HotCache' because its cache device 'Malloc0' was removed
00:20:54.586  [2024-12-17 00:51:43.701784] HotCache: Flushing cache
00:20:54.586  [2024-12-17 00:51:43.701802] HotCache: Flushing cache completed
00:20:54.586  [2024-12-17 00:51:43.701898] HotCache: Stopping cache
00:20:54.586  [2024-12-17 00:51:43.809641] HotCache: Done saving cache state!
00:20:54.586  [2024-12-17 00:51:43.825065] Cache HotCache successfully stopped
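Unlike the explicit bdev_ocf_delete used on FullCache, HotCache is torn down by hot removal: deleting its cache device makes vbdev_ocf's hotremove callback flush and stop the cache before Malloc0 disappears, leaving Malloc1 unclaimed. Reproduced with the same RPCs (names as in the log):

    $rpc_py bdev_ocf_create HotCache wt Malloc0 Malloc1   # attach cache to Malloc0/Malloc1
    $rpc_py bdev_malloc_delete Malloc0                    # yank the cache device
    # hotremove_cb flushes and stops HotCache; Malloc1 survives, now unclaimed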
00:20:54.843   00:51:43	-- management/create-destruct.sh@74 -- # bdev_check_claimed Malloc1
00:20:54.843    00:51:43	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1
00:20:54.843    00:51:43	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:55.100   00:51:44	-- management/create-destruct.sh@13 -- # '[' false = true ']'
00:20:55.100   00:51:44	-- management/create-destruct.sh@16 -- # return 1
00:20:55.100    00:51:44	-- management/create-destruct.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:20:55.359   00:51:44	-- management/create-destruct.sh@79 -- # status='[
00:20:55.359    {
00:20:55.359      "name": "Malloc1",
00:20:55.359      "aliases": [
00:20:55.359        "73f357af-e929-4407-9492-4daf915c78b3"
00:20:55.359      ],
00:20:55.359      "product_name": "Malloc disk",
00:20:55.359      "block_size": 512,
00:20:55.359      "num_blocks": 206848,
00:20:55.359      "uuid": "73f357af-e929-4407-9492-4daf915c78b3",
00:20:55.359      "assigned_rate_limits": {
00:20:55.359        "rw_ios_per_sec": 0,
00:20:55.359        "rw_mbytes_per_sec": 0,
00:20:55.359        "r_mbytes_per_sec": 0,
00:20:55.359        "w_mbytes_per_sec": 0
00:20:55.359      },
00:20:55.359      "claimed": false,
00:20:55.359      "zoned": false,
00:20:55.359      "supported_io_types": {
00:20:55.359        "read": true,
00:20:55.359        "write": true,
00:20:55.359        "unmap": true,
00:20:55.359        "write_zeroes": true,
00:20:55.359        "flush": true,
00:20:55.359        "reset": true,
00:20:55.359        "compare": false,
00:20:55.359        "compare_and_write": false,
00:20:55.359        "abort": true,
00:20:55.359        "nvme_admin": false,
00:20:55.359        "nvme_io": false
00:20:55.359      },
00:20:55.359      "memory_domains": [
00:20:55.359        {
00:20:55.359          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:55.359          "dma_device_type": 2
00:20:55.359        }
00:20:55.359      ],
00:20:55.359      "driver_specific": {}
00:20:55.359    }
00:20:55.359  ]'
00:20:55.359    00:51:44	-- management/create-destruct.sh@80 -- # jq 'map(select(.name == "HotCache")) == []'
00:20:55.359    00:51:44	-- management/create-destruct.sh@80 -- # echo '[' '{' '"name":' '"Malloc1",' '"aliases":' '[' '"73f357af-e929-4407-9492-4daf915c78b3"' '],' '"product_name":' '"Malloc' 'disk",' '"block_size":' 512, '"num_blocks":' 206848, '"uuid":' '"73f357af-e929-4407-9492-4daf915c78b3",' '"assigned_rate_limits":' '{' '"rw_ios_per_sec":' 0, '"rw_mbytes_per_sec":' 0, '"r_mbytes_per_sec":' 0, '"w_mbytes_per_sec":' 0 '},' '"claimed":' false, '"zoned":' false, '"supported_io_types":' '{' '"read":' true, '"write":' true, '"unmap":' true, '"write_zeroes":' true, '"flush":' true, '"reset":' true, '"compare":' false, '"compare_and_write":' false, '"abort":' true, '"nvme_admin":' false, '"nvme_io":' false '},' '"memory_domains":' '[' '{' '"dma_device_id":' '"SPDK_ACCEL_DMA_DEVICE",' '"dma_device_type":' 2 '}' '],' '"driver_specific":' '{}' '}' ']'
00:20:55.359   00:51:44	-- management/create-destruct.sh@80 -- # gone=true
00:20:55.359   00:51:44	-- management/create-destruct.sh@81 -- # [[ true == false ]]
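The 'gone' check filters the full bdev list for the stopped OCF bdev: map/select reduces the array to any entry still named HotCache and compares against the empty list, so true means it is really gone. As a standalone pipeline (filter copied from the @80 trace):

    gone=$($rpc_py bdev_get_bdevs | jq 'map(select(.name == "HotCache")) == []')
    [[ $gone == false ]] && exit 1   # @81: still listed -> test failure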
00:20:55.359   00:51:44	-- management/create-destruct.sh@87 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt NonExisting Malloc1
00:20:55.618  [2024-12-17 00:51:44.671927] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for cache device 'NonExisting' to connect
00:20:55.618  PartCache
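OCF bdev construction is deliberately lazy: @87 names a cache device, 'NonExisting', that was never created, yet the construct call succeeds with the NOTICE above and 'PartCache' simply waits. Had the missing bdev been created later, the waiting vbdev would have attached to it; a hypothetical follow-up (not executed in this run, sizes borrowed from the cache bdevs used elsewhere in this log):

    $rpc_py bdev_ocf_create PartCache wt NonExisting Malloc1   # registers and waits
    $rpc_py bdev_malloc_create 101 512 -b NonExisting          # would let PartCache start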
00:20:55.618   00:51:44	-- management/create-destruct.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:20:55.618   00:51:44	-- management/create-destruct.sh@91 -- # killprocess 1064419
00:20:55.618   00:51:44	-- common/autotest_common.sh@936 -- # '[' -z 1064419 ']'
00:20:55.618   00:51:44	-- common/autotest_common.sh@940 -- # kill -0 1064419
00:20:55.618    00:51:44	-- common/autotest_common.sh@941 -- # uname
00:20:55.618   00:51:44	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:55.618    00:51:44	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1064419
00:20:55.618   00:51:44	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:55.618   00:51:44	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:55.618   00:51:44	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1064419'
00:20:55.618  killing process with pid 1064419
00:20:55.618   00:51:44	-- common/autotest_common.sh@955 -- # kill 1064419
00:20:55.618   00:51:44	-- common/autotest_common.sh@960 -- # wait 1064419
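killprocess, traced at @936-@960, is the common teardown helper: verify a PID was given and the process is alive, special-case sudo-owned processes, then kill and reap. Roughly, per the trace (a simplified sketch; the @942/@946 sudo handling is elided):

    killprocess() {
        [ -z "$1" ] && return 1                # @936: a PID is required
        kill -0 "$1" 2>/dev/null || return 1   # @940: must still be running
        echo "killing process with pid $1"     # @954
        kill "$1"                              # @955
        wait "$1"                              # @960: reap and propagate exit status
    }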
00:20:55.876  [2024-12-17 00:51:44.890957] bdev.c:2354:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc1'!
00:20:55.876  [2024-12-17 00:51:44.891057] vbdev_ocf.c:1361:hotremove_cb: *NOTICE*: Deinitializing 'PartCache' because its core device 'Malloc1' was removed
00:20:56.135  
00:20:56.135  real	0m7.796s
00:20:56.135  user	0m12.417s
00:20:56.135  sys	0m1.498s
00:20:56.135   00:51:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:56.135   00:51:45	-- common/autotest_common.sh@10 -- # set +x
00:20:56.135  ************************************
00:20:56.135  END TEST ocf_create_destruct
00:20:56.135  ************************************
00:20:56.135   00:51:45	-- ocf/ocf.sh@16 -- # run_test ocf_multicore /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh
00:20:56.135   00:51:45	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:56.135   00:51:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:56.135   00:51:45	-- common/autotest_common.sh@10 -- # set +x
00:20:56.135  ************************************
00:20:56.135  START TEST ocf_multicore
00:20:56.135  ************************************
00:20:56.135   00:51:45	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh
00:20:56.135    00:51:45	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:56.135     00:51:45	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:56.135     00:51:45	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:56.392    00:51:45	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:56.392    00:51:45	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:56.392    00:51:45	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:56.392    00:51:45	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:56.392    00:51:45	-- scripts/common.sh@335 -- # IFS=.-:
00:20:56.392    00:51:45	-- scripts/common.sh@335 -- # read -ra ver1
00:20:56.392    00:51:45	-- scripts/common.sh@336 -- # IFS=.-:
00:20:56.392    00:51:45	-- scripts/common.sh@336 -- # read -ra ver2
00:20:56.392    00:51:45	-- scripts/common.sh@337 -- # local 'op=<'
00:20:56.392    00:51:45	-- scripts/common.sh@339 -- # ver1_l=2
00:20:56.393    00:51:45	-- scripts/common.sh@340 -- # ver2_l=1
00:20:56.393    00:51:45	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:56.393    00:51:45	-- scripts/common.sh@343 -- # case "$op" in
00:20:56.393    00:51:45	-- scripts/common.sh@344 -- # : 1
00:20:56.393    00:51:45	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:56.393    00:51:45	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:56.393     00:51:45	-- scripts/common.sh@364 -- # decimal 1
00:20:56.393     00:51:45	-- scripts/common.sh@352 -- # local d=1
00:20:56.393     00:51:45	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:56.393     00:51:45	-- scripts/common.sh@354 -- # echo 1
00:20:56.393    00:51:45	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:56.393     00:51:45	-- scripts/common.sh@365 -- # decimal 2
00:20:56.393     00:51:45	-- scripts/common.sh@352 -- # local d=2
00:20:56.393     00:51:45	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:56.393     00:51:45	-- scripts/common.sh@354 -- # echo 2
00:20:56.393    00:51:45	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:56.393    00:51:45	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:56.393    00:51:45	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:56.393    00:51:45	-- scripts/common.sh@367 -- # return 0
00:20:56.393    00:51:45	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:56.393    00:51:45	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:56.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:56.393  		--rc genhtml_branch_coverage=1
00:20:56.393  		--rc genhtml_function_coverage=1
00:20:56.393  		--rc genhtml_legend=1
00:20:56.393  		--rc geninfo_all_blocks=1
00:20:56.393  		--rc geninfo_unexecuted_blocks=1
00:20:56.393  		
00:20:56.393  		'
00:20:56.393    00:51:45	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:56.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:56.393  		--rc genhtml_branch_coverage=1
00:20:56.393  		--rc genhtml_function_coverage=1
00:20:56.393  		--rc genhtml_legend=1
00:20:56.393  		--rc geninfo_all_blocks=1
00:20:56.393  		--rc geninfo_unexecuted_blocks=1
00:20:56.393  		
00:20:56.393  		'
00:20:56.393    00:51:45	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:56.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:56.393  		--rc genhtml_branch_coverage=1
00:20:56.393  		--rc genhtml_function_coverage=1
00:20:56.393  		--rc genhtml_legend=1
00:20:56.393  		--rc geninfo_all_blocks=1
00:20:56.393  		--rc geninfo_unexecuted_blocks=1
00:20:56.393  		
00:20:56.393  		'
00:20:56.393    00:51:45	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:56.393  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:56.393  		--rc genhtml_branch_coverage=1
00:20:56.393  		--rc genhtml_function_coverage=1
00:20:56.393  		--rc genhtml_legend=1
00:20:56.393  		--rc geninfo_all_blocks=1
00:20:56.393  		--rc geninfo_unexecuted_blocks=1
00:20:56.393  		
00:20:56.393  		'
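The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) is a field-wise version comparison used to decide which lcov options to export: split both versions on '.', '-' and ':', then return at the first differing field. A simplified sketch of the same logic (the real helper also normalizes non-numeric fields through its decimal function):

    lt() {   # "is version $1 older than $2?"
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1   # equal -> not less-than
    }
    lt 1.15 2 && echo "lcov older than 2.x: enable the branch/function coverage flags"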
00:20:56.393   00:51:45	-- management/multicore.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:20:56.393   00:51:45	-- management/multicore.sh@12 -- # spdk_pid='?'
00:20:56.393   00:51:45	-- management/multicore.sh@24 -- # start_spdk
00:20:56.393   00:51:45	-- management/multicore.sh@15 -- # spdk_pid=1065538
00:20:56.393   00:51:45	-- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:20:56.393   00:51:45	-- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:20:56.393   00:51:45	-- management/multicore.sh@17 -- # waitforlisten 1065538
00:20:56.393   00:51:45	-- common/autotest_common.sh@829 -- # '[' -z 1065538 ']'
00:20:56.393   00:51:45	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:56.393   00:51:45	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:56.393   00:51:45	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:56.393  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:56.393   00:51:45	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:56.393   00:51:45	-- common/autotest_common.sh@10 -- # set +x
00:20:56.393  [2024-12-17 00:51:45.488541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:56.393  [2024-12-17 00:51:45.488603] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065538 ]
00:20:56.393  EAL: No free 2048 kB hugepages reported on node 1
00:20:56.393  [2024-12-17 00:51:45.580091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:56.393  [2024-12-17 00:51:45.630071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:56.651  [2024-12-17 00:51:45.809943] 'OCF_Core' volume operations registered
00:20:56.651  [2024-12-17 00:51:45.812358] 'OCF_Cache' volume operations registered
00:20:56.651  [2024-12-17 00:51:45.815240] 'OCF Composite' volume operations registered
00:20:56.651  [2024-12-17 00:51:45.817619] 'SPDK_block_device' volume operations registered
00:20:57.217   00:51:46	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:57.217   00:51:46	-- common/autotest_common.sh@862 -- # return 0
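start_spdk launches iscsi_tgt in the background and hands its PID to waitforlisten, whose @829-@862 trace appears above: poll until the target answers on /var/tmp/spdk.sock or the retry budget runs out. A condensed sketch, assuming an rpc.py probe fails while the socket is not yet up:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i   # @829-@834
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1                        # target died early
            $rpc_py -s "$rpc_addr" rpc_get_methods &>/dev/null && break   # socket answers
            sleep 0.1
        done
        ((i == 0)) && return 1   # @858: retries exhausted
        return 0                 # @862
    }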
00:20:57.217   00:51:46	-- management/multicore.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core0
00:20:57.476  Core0
00:20:57.476   00:51:46	-- management/multicore.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core1
00:20:57.734  Core1
00:20:57.734   00:51:46	-- management/multicore.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Core0
00:20:57.992  [2024-12-17 00:51:47.116301] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for cache device 'Cache' to connect
00:20:57.992  C1
00:20:57.992   00:51:47	-- management/multicore.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core1
00:20:58.251  [2024-12-17 00:51:47.369005] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for cache device 'Cache' to connect
00:20:58.251  C2
00:20:58.251   00:51:47	-- management/multicore.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:58.251   00:51:47	-- management/multicore.sh@34 -- # jq -e 'any(select(.started)) == false'
00:20:58.509  true
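The @34 check leans on jq -e, which maps the filter result onto an exit status (0 only when the last output is neither false nor null), so a JSON predicate doubles as a shell assertion. Here it proves that neither C1 nor C2 has started while 'Cache' is still missing:

    $rpc_py bdev_ocf_get_bdevs | jq -e 'any(select(.started)) == false'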
00:20:58.509   00:51:47	-- management/multicore.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache
00:20:58.768  [2024-12-17 00:51:47.895344] Inserting cache C1
00:20:58.768  [2024-12-17 00:51:47.895706] C1: Metadata initialized
00:20:58.768  [2024-12-17 00:51:47.896171] C1: Successfully added
00:20:58.768  [2024-12-17 00:51:47.896187] C1: Cache mode : wt
00:20:58.768  [2024-12-17 00:51:47.896263] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache
00:20:58.768  Cache
00:20:58.768  [2024-12-17 00:51:47.905193] C1: Super block config offset : 0 kiB
00:20:58.768  [2024-12-17 00:51:47.905222] C1: Super block config size : 2200 B
00:20:58.768  [2024-12-17 00:51:47.905229] C1: Super block runtime offset : 128 kiB
00:20:58.768  [2024-12-17 00:51:47.905236] C1: Super block runtime size : 4 B
00:20:58.768  [2024-12-17 00:51:47.905242] C1: Reserved offset : 256 kiB
00:20:58.768  [2024-12-17 00:51:47.905249] C1: Reserved size : 128 kiB
00:20:58.768  [2024-12-17 00:51:47.905256] C1: Part config offset : 384 kiB
00:20:58.768  [2024-12-17 00:51:47.905262] C1: Part config size : 48 kiB
00:20:58.768  [2024-12-17 00:51:47.905269] C1: Part runtime offset : 640 kiB
00:20:58.768  [2024-12-17 00:51:47.905276] C1: Part runtime size : 72 kiB
00:20:58.768  [2024-12-17 00:51:47.905282] C1: Core config offset : 768 kiB
00:20:58.768  [2024-12-17 00:51:47.905289] C1: Core config size : 512 kiB
00:20:58.768  [2024-12-17 00:51:47.905295] C1: Core runtime offset : 1792 kiB
00:20:58.768  [2024-12-17 00:51:47.905302] C1: Core runtime size : 1172 kiB
00:20:58.768  [2024-12-17 00:51:47.905308] C1: Core UUID offset : 3072 kiB
00:20:58.768  [2024-12-17 00:51:47.905315] C1: Core UUID size : 16384 kiB
00:20:58.768  [2024-12-17 00:51:47.905321] C1: Cleaning offset : 35840 kiB
00:20:58.768  [2024-12-17 00:51:47.905328] C1: Cleaning size : 196 kiB
00:20:58.768  [2024-12-17 00:51:47.905334] C1: LRU list offset : 36096 kiB
00:20:58.768  [2024-12-17 00:51:47.905341] C1: LRU list size : 148 kiB
00:20:58.768  [2024-12-17 00:51:47.905347] C1: Collision offset : 36352 kiB
00:20:58.768  [2024-12-17 00:51:47.905354] C1: Collision size : 196 kiB
00:20:58.768  [2024-12-17 00:51:47.905360] C1: List info offset : 36608 kiB
00:20:58.768  [2024-12-17 00:51:47.905366] C1: List info size : 148 kiB
00:20:58.768  [2024-12-17 00:51:47.905373] C1: Hash offset : 36864 kiB
00:20:58.768  [2024-12-17 00:51:47.905380] C1: Hash size : 20 kiB
00:20:58.768  [2024-12-17 00:51:47.905387] C1: Cache line size: 4 kiB
00:20:58.768  [2024-12-17 00:51:47.905396] C1: Metadata capacity: 18 MiB
00:20:58.768  [2024-12-17 00:51:47.914003] C1: Policy 'always' initialized successfully
00:20:58.768   00:51:47	-- management/multicore.sh@39 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:58.768   00:51:47	-- management/multicore.sh@39 -- # jq -e 'all(select(.started)) == true'
00:20:59.027  [2024-12-17 00:51:48.037591] C1: Done saving cache state!
00:20:59.027  [2024-12-17 00:51:48.071209] C1: Cache attached
00:20:59.027  [2024-12-17 00:51:48.071306] C1: Successfully attached
00:20:59.027  [2024-12-17 00:51:48.071595] C1: Inserting core Core1
00:20:59.027  [2024-12-17 00:51:48.071618] C1.Core1: Sequential cutoff init
00:20:59.027  [2024-12-17 00:51:48.104783] C1.Core1: Successfully added
00:20:59.027  [2024-12-17 00:51:48.105571] C1: Inserting core Core0
00:20:59.027  [2024-12-17 00:51:48.105602] C1.Core0: Sequential cutoff init
00:20:59.027  [2024-12-17 00:51:48.139209] C1.Core0: Successfully added
00:20:59.027  true
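This is the multicore scenario the test is named for: creating the shared cache bdev lets both waiting vbdevs start, C1 bringing the cache instance up and C2 attaching to it ('connects to existing cache device'), so a single cache ends up holding cores Core0 and Core1. The whole sequence, reduced to its RPC calls:

    $rpc_py bdev_ocf_create C1 wt Cache Core0      # waits for 'Cache'
    $rpc_py bdev_ocf_create C2 wt Cache Core1      # waits for 'Cache'
    $rpc_py bdev_malloc_create 101 512 -b Cache    # both start; one instance, two cores
    $rpc_py bdev_ocf_get_bdevs | jq -e 'all(select(.started)) == true'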
00:20:59.027   00:51:48	-- management/multicore.sh@43 -- # waitforbdev C2
00:20:59.027   00:51:48	-- common/autotest_common.sh@897 -- # local bdev_name=C2
00:20:59.027   00:51:48	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:20:59.027   00:51:48	-- common/autotest_common.sh@899 -- # local i
00:20:59.027   00:51:48	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:20:59.027   00:51:48	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:20:59.027   00:51:48	-- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:20:59.285   00:51:48	-- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b C2 -t 2000
00:20:59.543  [
00:20:59.543    {
00:20:59.543      "name": "C2",
00:20:59.543      "aliases": [
00:20:59.543        "5cd46a11-d85c-57c2-b70a-676cda07b459"
00:20:59.543      ],
00:20:59.543      "product_name": "SPDK OCF",
00:20:59.543      "block_size": 512,
00:20:59.543      "num_blocks": 2048,
00:20:59.543      "uuid": "5cd46a11-d85c-57c2-b70a-676cda07b459",
00:20:59.543      "assigned_rate_limits": {
00:20:59.543        "rw_ios_per_sec": 0,
00:20:59.543        "rw_mbytes_per_sec": 0,
00:20:59.543        "r_mbytes_per_sec": 0,
00:20:59.543        "w_mbytes_per_sec": 0
00:20:59.543      },
00:20:59.543      "claimed": false,
00:20:59.543      "zoned": false,
00:20:59.543      "supported_io_types": {
00:20:59.543        "read": true,
00:20:59.543        "write": true,
00:20:59.543        "unmap": true,
00:20:59.543        "write_zeroes": true,
00:20:59.543        "flush": true,
00:20:59.543        "reset": false,
00:20:59.543        "compare": false,
00:20:59.543        "compare_and_write": false,
00:20:59.543        "abort": false,
00:20:59.543        "nvme_admin": false,
00:20:59.543        "nvme_io": false
00:20:59.543      },
00:20:59.543      "driver_specific": {
00:20:59.543        "cache_device": "Cache",
00:20:59.543        "core_device": "Core1",
00:20:59.543        "mode": "wt",
00:20:59.543        "cache_line_size": 4,
00:20:59.543        "metadata_volatile": false
00:20:59.543      }
00:20:59.543    }
00:20:59.543  ]
00:20:59.543   00:51:48	-- common/autotest_common.sh@905 -- # return 0
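waitforbdev (@897-@905 above) lets the RPC itself do the blocking: bdev_get_bdevs takes -t, a timeout in milliseconds during which it waits for the named bdev to appear, after bdev_wait_for_examine has drained pending examine callbacks. Roughly:

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}   # @900: default 2000 ms
        $rpc_py bdev_wait_for_examine                # @902
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null   # @904
    }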
00:20:59.543   00:51:48	-- management/multicore.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete C2
00:20:59.802  [2024-12-17 00:51:48.903438] C1: Flushing cache
00:20:59.802  [2024-12-17 00:51:48.903472] C1: Flushing cache completed
00:20:59.802  [2024-12-17 00:51:48.904483] C1.Core1: Removing core
00:20:59.802  [2024-12-17 00:51:48.939012] C1: Core Core1 successfully removed
00:20:59.802  [2024-12-17 00:51:48.939056] vbdev_ocf.c: 299:stop_vbdev: *NOTICE*: Not stopping cache instance 'Cache' because it is referenced by other OCF bdev
00:20:59.802   00:51:48	-- management/multicore.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C1
00:20:59.802   00:51:48	-- management/multicore.sh@49 -- # jq -e '.[0] | .started'
00:21:00.060  true
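Deleting C2 removes only its core from the running cache; the instance itself survives because C1 still references it (the 'Not stopping cache instance' NOTICE above), which the @49 check then confirms:

    $rpc_py bdev_ocf_delete C2
    $rpc_py bdev_ocf_get_bdevs C1 | jq -e '.[0] | .started'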
00:21:00.060   00:51:49	-- management/multicore.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core1
00:21:00.318  [2024-12-17 00:51:49.446178] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache
00:21:00.318  [2024-12-17 00:51:49.446420] C1: Inserting core Core1
00:21:00.318  [2024-12-17 00:51:49.446444] C1.Core1: Sequential cutoff init
00:21:00.318  [2024-12-17 00:51:49.481212] C1.Core1: Successfully added
00:21:00.318  C2
00:21:00.318   00:51:49	-- management/multicore.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C2
00:21:00.318   00:51:49	-- management/multicore.sh@54 -- # jq -e '.[0] | .started'
00:21:00.576  true
00:21:00.576   00:51:49	-- management/multicore.sh@59 -- # stop_spdk
00:21:00.576   00:51:49	-- management/multicore.sh@20 -- # killprocess 1065538
00:21:00.576   00:51:49	-- common/autotest_common.sh@936 -- # '[' -z 1065538 ']'
00:21:00.576   00:51:49	-- common/autotest_common.sh@940 -- # kill -0 1065538
00:21:00.576    00:51:49	-- common/autotest_common.sh@941 -- # uname
00:21:00.576   00:51:49	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:00.577    00:51:49	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1065538
00:21:00.577   00:51:49	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:00.577   00:51:49	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:00.577   00:51:49	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1065538'
00:21:00.577  killing process with pid 1065538
00:21:00.577   00:51:49	-- common/autotest_common.sh@955 -- # kill 1065538
00:21:00.577   00:51:49	-- common/autotest_common.sh@960 -- # wait 1065538
00:21:00.835  [2024-12-17 00:51:49.967701] C1: Flushing cache
00:21:00.835  [2024-12-17 00:51:49.967755] C1: Flushing cache completed
00:21:00.835  [2024-12-17 00:51:49.967802] C1: Stopping cache
00:21:00.835  [2024-12-17 00:51:50.091748] C1: Done saving cache state!
00:21:01.093  [2024-12-17 00:51:50.107631] Cache C1 successfully stopped
00:21:01.352   00:51:50	-- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT
00:21:01.352   00:51:50	-- management/multicore.sh@62 -- # start_spdk
00:21:01.352   00:51:50	-- management/multicore.sh@15 -- # spdk_pid=1066140
00:21:01.352   00:51:50	-- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:21:01.352   00:51:50	-- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:21:01.352   00:51:50	-- management/multicore.sh@17 -- # waitforlisten 1066140
00:21:01.352   00:51:50	-- common/autotest_common.sh@829 -- # '[' -z 1066140 ']'
00:21:01.352   00:51:50	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:01.352   00:51:50	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:01.352   00:51:50	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:01.352  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:01.352   00:51:50	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:01.352   00:51:50	-- common/autotest_common.sh@10 -- # set +x
00:21:01.352  [2024-12-17 00:51:50.505143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:01.352  [2024-12-17 00:51:50.505219] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066140 ]
00:21:01.352  EAL: No free 2048 kB hugepages reported on node 1
00:21:01.352  [2024-12-17 00:51:50.601321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:01.611  [2024-12-17 00:51:50.649244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:01.611  [2024-12-17 00:51:50.808460] 'OCF_Core' volume operations registered
00:21:01.611  [2024-12-17 00:51:50.810604] 'OCF_Cache' volume operations registered
00:21:01.611  [2024-12-17 00:51:50.813178] 'OCF Composite' volume operations registered
00:21:01.611  [2024-12-17 00:51:50.815366] 'SPDK_block_device' volume operations registered
00:21:02.546   00:51:51	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:02.546   00:51:51	-- common/autotest_common.sh@862 -- # return 0
00:21:02.546   00:51:51	-- management/multicore.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache
00:21:02.546  Cache
00:21:02.546   00:51:51	-- management/multicore.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc
00:21:02.804  Malloc
00:21:02.804   00:51:52	-- management/multicore.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core
00:21:03.062  Core
00:21:03.062   00:51:52	-- management/multicore.sh@68 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Malloc
00:21:03.321  [2024-12-17 00:51:52.530554] Inserting cache C1
00:21:03.321  [2024-12-17 00:51:52.530969] C1: Metadata initialized
00:21:03.321  [2024-12-17 00:51:52.531412] C1: Successfully added
00:21:03.321  [2024-12-17 00:51:52.531427] C1: Cache mode : wt
00:21:03.321  [2024-12-17 00:51:52.540265] C1: Super block config offset : 0 kiB
00:21:03.321  [2024-12-17 00:51:52.540287] C1: Super block config size : 2200 B
00:21:03.321  [2024-12-17 00:51:52.540295] C1: Super block runtime offset : 128 kiB
00:21:03.321  [2024-12-17 00:51:52.540301] C1: Super block runtime size : 4 B
00:21:03.321  [2024-12-17 00:51:52.540308] C1: Reserved offset : 256 kiB
00:21:03.321  [2024-12-17 00:51:52.540315] C1: Reserved size : 128 kiB
00:21:03.321  [2024-12-17 00:51:52.540321] C1: Part config offset : 384 kiB
00:21:03.321  [2024-12-17 00:51:52.540328] C1: Part config size : 48 kiB
00:21:03.321  [2024-12-17 00:51:52.540334] C1: Part runtime offset : 640 kiB
00:21:03.321  [2024-12-17 00:51:52.540341] C1: Part runtime size : 72 kiB
00:21:03.321  [2024-12-17 00:51:52.540347] C1: Core config offset : 768 kiB
00:21:03.321  [2024-12-17 00:51:52.540354] C1: Core config size : 512 kiB
00:21:03.321  [2024-12-17 00:51:52.540360] C1: Core runtime offset : 1792 kiB
00:21:03.321  [2024-12-17 00:51:52.540367] C1: Core runtime size : 1172 kiB
00:21:03.321  [2024-12-17 00:51:52.540373] C1: Core UUID offset : 3072 kiB
00:21:03.321  [2024-12-17 00:51:52.540380] C1: Core UUID size : 16384 kiB
00:21:03.321  [2024-12-17 00:51:52.540386] C1: Cleaning offset : 35840 kiB
00:21:03.321  [2024-12-17 00:51:52.540392] C1: Cleaning size : 196 kiB
00:21:03.321  [2024-12-17 00:51:52.540399] C1: LRU list offset : 36096 kiB
00:21:03.321  [2024-12-17 00:51:52.540405] C1: LRU list size : 148 kiB
00:21:03.321  [2024-12-17 00:51:52.540412] C1: Collision offset : 36352 kiB
00:21:03.321  [2024-12-17 00:51:52.540418] C1: Collision size : 196 kiB
00:21:03.321  [2024-12-17 00:51:52.540425] C1: List info offset : 36608 kiB
00:21:03.321  [2024-12-17 00:51:52.540437] C1: List info size : 148 kiB
00:21:03.321  [2024-12-17 00:51:52.540444] C1: Hash offset : 36864 kiB
00:21:03.321  [2024-12-17 00:51:52.540451] C1: Hash size : 20 kiB
00:21:03.321  [2024-12-17 00:51:52.540458] C1: Cache line size: 4 kiB
00:21:03.321  [2024-12-17 00:51:52.540466] C1: Metadata capacity: 18 MiB
00:21:03.321  [2024-12-17 00:51:52.548832] C1: Policy 'always' initialized successfully
00:21:03.580  [2024-12-17 00:51:52.661593] C1: Done saving cache state!
00:21:03.580  [2024-12-17 00:51:52.692470] C1: Cache attached
00:21:03.580  [2024-12-17 00:51:52.692566] C1: Successfully attached
00:21:03.580  [2024-12-17 00:51:52.692832] C1: Inserting core Malloc
00:21:03.580  [2024-12-17 00:51:52.692854] C1.Malloc: Sequential cutoff init
00:21:03.580  [2024-12-17 00:51:52.723798] C1.Malloc: Successfully added
00:21:03.580  C1
00:21:03.580   00:51:52	-- management/multicore.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core
00:21:03.839  [2024-12-17 00:51:52.978359] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache
00:21:03.839  [2024-12-17 00:51:52.978580] C1: Inserting core Core
00:21:03.839  [2024-12-17 00:51:52.978602] C1.Core: Sequential cutoff init
00:21:03.839  [2024-12-17 00:51:53.010413] C1.Core: Successfully added
00:21:03.839  C2
00:21:03.839   00:51:53	-- management/multicore.sh@71 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs Cache
00:21:03.839   00:51:53	-- management/multicore.sh@72 -- # jq 'length == 2'
00:21:04.107  true
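bdev_ocf_get_bdevs also accepts a base-bdev name and then lists only the OCF bdevs built on it, so the @71/@72 pair asserts that both C1 and C2 reference the shared cache device:

    $rpc_py bdev_ocf_get_bdevs Cache | jq 'length == 2'   # prints the 'true' above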
00:21:04.107   00:51:53	-- management/multicore.sh@74 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Cache
00:21:04.407  [2024-12-17 00:51:53.509514] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Cache' was removed
00:21:04.407  [2024-12-17 00:51:53.509559] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Cache' was removed
00:21:04.407  [2024-12-17 00:51:53.509820] C1: Flushing cache
00:21:04.407  [2024-12-17 00:51:53.509838] C1: Flushing cache completed
00:21:04.407  [2024-12-17 00:51:53.510124] C1: Flushing cache
00:21:04.407  [2024-12-17 00:51:53.510135] C1: Flushing cache completed
00:21:04.407  [2024-12-17 00:51:53.510226] C1: Stopping cache
00:21:04.407  [2024-12-17 00:51:53.618324] C1: Done saving cache state!
00:21:04.407  [2024-12-17 00:51:53.633457] Cache C1 successfully stopped
00:21:04.685   00:51:53	-- management/multicore.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:04.685   00:51:53	-- management/multicore.sh@76 -- # jq -e '. == []'
00:21:04.963  true
00:21:04.963   00:51:53	-- management/multicore.sh@81 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Malloc NonExisting
00:21:04.963  [2024-12-17 00:51:54.187757] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for core device 'NonExisting' to connect
00:21:04.963  C1
00:21:04.963   00:51:54	-- management/multicore.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Malloc NonExisting
00:21:05.234  [2024-12-17 00:51:54.436467] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for core device 'NonExisting' to connect
00:21:05.234  C2
00:21:05.234   00:51:54	-- management/multicore.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C3 wt Malloc Core
00:21:05.501  [2024-12-17 00:51:54.689204] Inserting cache C3
00:21:05.501  [2024-12-17 00:51:54.689608] C3: Metadata initialized
00:21:05.501  [2024-12-17 00:51:54.690053] C3: Successfully added
00:21:05.501  [2024-12-17 00:51:54.690061] C3: Cache mode : wt
00:21:05.501  [2024-12-17 00:51:54.698967] C3: Super block config offset : 0 kiB
00:21:05.501  [2024-12-17 00:51:54.698987] C3: Super block config size : 2200 B
00:21:05.501  [2024-12-17 00:51:54.698994] C3: Super block runtime offset : 128 kiB
00:21:05.501  [2024-12-17 00:51:54.699001] C3: Super block runtime size : 4 B
00:21:05.501  [2024-12-17 00:51:54.699008] C3: Reserved offset : 256 kiB
00:21:05.501  [2024-12-17 00:51:54.699015] C3: Reserved size : 128 kiB
00:21:05.501  [2024-12-17 00:51:54.699021] C3: Part config offset : 384 kiB
00:21:05.501  [2024-12-17 00:51:54.699028] C3: Part config size : 48 kiB
00:21:05.501  [2024-12-17 00:51:54.699034] C3: Part runtime offset : 640 kiB
00:21:05.501  [2024-12-17 00:51:54.699041] C3: Part runtime size : 72 kiB
00:21:05.501  [2024-12-17 00:51:54.699047] C3: Core config offset : 768 kiB
00:21:05.501  [2024-12-17 00:51:54.699053] C3: Core config size : 512 kiB
00:21:05.501  [2024-12-17 00:51:54.699060] C3: Core runtime offset : 1792 kiB
00:21:05.501  [2024-12-17 00:51:54.699067] C3: Core runtime size : 1172 kiB
00:21:05.501  [2024-12-17 00:51:54.699079] C3: Core UUID offset : 3072 kiB
00:21:05.501  [2024-12-17 00:51:54.699086] C3: Core UUID size : 16384 kiB
00:21:05.501  [2024-12-17 00:51:54.699092] C3: Cleaning offset : 35840 kiB
00:21:05.501  [2024-12-17 00:51:54.699098] C3: Cleaning size : 196 kiB
00:21:05.501  [2024-12-17 00:51:54.699105] C3: LRU list offset : 36096 kiB
00:21:05.501  [2024-12-17 00:51:54.699111] C3: LRU list size : 148 kiB
00:21:05.501  [2024-12-17 00:51:54.699118] C3: Collision offset : 36352 kiB
00:21:05.501  [2024-12-17 00:51:54.699124] C3: Collision size : 196 kiB
00:21:05.501  [2024-12-17 00:51:54.699131] C3: List info offset : 36608 kiB
00:21:05.501  [2024-12-17 00:51:54.699137] C3: List info size : 148 kiB
00:21:05.501  [2024-12-17 00:51:54.699144] C3: Hash offset : 36864 kiB
00:21:05.501  [2024-12-17 00:51:54.699150] C3: Hash size : 20 kiB
00:21:05.501  [2024-12-17 00:51:54.699157] C3: Cache line size: 4 kiB
00:21:05.501  [2024-12-17 00:51:54.699166] C3: Metadata capacity: 18 MiB
00:21:05.501  [2024-12-17 00:51:54.707657] C3: Policy 'always' initialized successfully
00:21:05.760  [2024-12-17 00:51:54.821616] C3: Done saving cache state!
00:21:05.760  [2024-12-17 00:51:54.853161] C3: Cache attached
00:21:05.760  [2024-12-17 00:51:54.853259] C3: Successfully attached
00:21:05.760  [2024-12-17 00:51:54.853529] C3: Inserting core Core
00:21:05.760  [2024-12-17 00:51:54.853553] C3.Core: Sequential cutoff init
00:21:05.760  [2024-12-17 00:51:54.885181] C3.Core: Successfully added
00:21:05.760  C3
00:21:05.760   00:51:54	-- management/multicore.sh@85 -- # stop_spdk
00:21:05.760   00:51:54	-- management/multicore.sh@20 -- # killprocess 1066140
00:21:05.760   00:51:54	-- common/autotest_common.sh@936 -- # '[' -z 1066140 ']'
00:21:05.760   00:51:54	-- common/autotest_common.sh@940 -- # kill -0 1066140
00:21:05.760    00:51:54	-- common/autotest_common.sh@941 -- # uname
00:21:05.760   00:51:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:05.760    00:51:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1066140
00:21:05.760   00:51:54	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:05.760   00:51:54	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:05.760   00:51:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1066140'
00:21:05.760  killing process with pid 1066140
00:21:05.760   00:51:54	-- common/autotest_common.sh@955 -- # kill 1066140
00:21:05.760   00:51:54	-- common/autotest_common.sh@960 -- # wait 1066140
00:21:06.019  [2024-12-17 00:51:55.116738] C3: Flushing cache
00:21:06.019  [2024-12-17 00:51:55.116792] C3: Flushing cache completed
00:21:06.019  [2024-12-17 00:51:55.116840] C3: Stopping cache
00:21:06.019  [2024-12-17 00:51:55.224871] C3: Done saving cache state!
00:21:06.019  [2024-12-17 00:51:55.241926] Cache C3 successfully stopped
00:21:06.019  [2024-12-17 00:51:55.244075] bdev.c:2354:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc'!
00:21:06.019  [2024-12-17 00:51:55.244133] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Malloc' was removed
00:21:06.019  [2024-12-17 00:51:55.244151] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Malloc' was removed
00:21:06.586   00:51:55	-- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT
00:21:06.586  
00:21:06.586  real	0m10.372s
00:21:06.586  user	0m15.366s
00:21:06.586  sys	0m2.109s
00:21:06.586   00:51:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:06.586   00:51:55	-- common/autotest_common.sh@10 -- # set +x
00:21:06.586  ************************************
00:21:06.586  END TEST ocf_multicore
00:21:06.587  ************************************
00:21:06.587   00:51:55	-- ocf/ocf.sh@17 -- # run_test ocf_remove /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh
00:21:06.587   00:51:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:06.587   00:51:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:06.587   00:51:55	-- common/autotest_common.sh@10 -- # set +x
00:21:06.587  ************************************
00:21:06.587  START TEST ocf_remove
00:21:06.587  ************************************
00:21:06.587   00:51:55	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh
00:21:06.587    00:51:55	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:06.587     00:51:55	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:06.587     00:51:55	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:06.587    00:51:55	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:06.587    00:51:55	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:06.587    00:51:55	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:06.587    00:51:55	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:06.587    00:51:55	-- scripts/common.sh@335 -- # IFS=.-:
00:21:06.587    00:51:55	-- scripts/common.sh@335 -- # read -ra ver1
00:21:06.587    00:51:55	-- scripts/common.sh@336 -- # IFS=.-:
00:21:06.587    00:51:55	-- scripts/common.sh@336 -- # read -ra ver2
00:21:06.587    00:51:55	-- scripts/common.sh@337 -- # local 'op=<'
00:21:06.587    00:51:55	-- scripts/common.sh@339 -- # ver1_l=2
00:21:06.587    00:51:55	-- scripts/common.sh@340 -- # ver2_l=1
00:21:06.587    00:51:55	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:06.587    00:51:55	-- scripts/common.sh@343 -- # case "$op" in
00:21:06.587    00:51:55	-- scripts/common.sh@344 -- # : 1
00:21:06.587    00:51:55	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:06.587    00:51:55	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:06.587     00:51:55	-- scripts/common.sh@364 -- # decimal 1
00:21:06.587     00:51:55	-- scripts/common.sh@352 -- # local d=1
00:21:06.587     00:51:55	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:06.587     00:51:55	-- scripts/common.sh@354 -- # echo 1
00:21:06.587    00:51:55	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:06.587     00:51:55	-- scripts/common.sh@365 -- # decimal 2
00:21:06.587     00:51:55	-- scripts/common.sh@352 -- # local d=2
00:21:06.587     00:51:55	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:06.587     00:51:55	-- scripts/common.sh@354 -- # echo 2
00:21:06.587    00:51:55	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:06.587    00:51:55	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:06.587    00:51:55	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:06.587    00:51:55	-- scripts/common.sh@367 -- # return 0
00:21:06.587    00:51:55	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:06.587    00:51:55	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:06.587  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:06.587  		--rc genhtml_branch_coverage=1
00:21:06.587  		--rc genhtml_function_coverage=1
00:21:06.587  		--rc genhtml_legend=1
00:21:06.587  		--rc geninfo_all_blocks=1
00:21:06.587  		--rc geninfo_unexecuted_blocks=1
00:21:06.587  		
00:21:06.587  		'
00:21:06.587    00:51:55	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:06.587  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:06.587  		--rc genhtml_branch_coverage=1
00:21:06.587  		--rc genhtml_function_coverage=1
00:21:06.587  		--rc genhtml_legend=1
00:21:06.587  		--rc geninfo_all_blocks=1
00:21:06.587  		--rc geninfo_unexecuted_blocks=1
00:21:06.587  		
00:21:06.587  		'
00:21:06.587    00:51:55	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:06.587  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:06.587  		--rc genhtml_branch_coverage=1
00:21:06.587  		--rc genhtml_function_coverage=1
00:21:06.587  		--rc genhtml_legend=1
00:21:06.587  		--rc geninfo_all_blocks=1
00:21:06.587  		--rc geninfo_unexecuted_blocks=1
00:21:06.587  		
00:21:06.587  		'
00:21:06.587    00:51:55	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:06.587  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:06.587  		--rc genhtml_branch_coverage=1
00:21:06.587  		--rc genhtml_function_coverage=1
00:21:06.587  		--rc genhtml_legend=1
00:21:06.587  		--rc geninfo_all_blocks=1
00:21:06.587  		--rc geninfo_unexecuted_blocks=1
00:21:06.587  		
00:21:06.587  		'
00:21:06.587   00:51:55	-- management/remove.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:21:06.587   00:51:55	-- management/remove.sh@12 -- # rm -f
00:21:06.846   00:51:55	-- management/remove.sh@13 -- # truncate -s 128M aio0
00:21:06.846   00:51:55	-- management/remove.sh@14 -- # truncate -s 128M aio1
00:21:06.846   00:51:55	-- management/remove.sh@16 -- # jq .
00:21:06.846   00:51:55	-- management/remove.sh@48 -- # spdk_pid=1066927
00:21:06.846   00:51:55	-- management/remove.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config
00:21:06.846   00:51:55	-- management/remove.sh@50 -- # waitforlisten 1066927
00:21:06.846   00:51:55	-- common/autotest_common.sh@829 -- # '[' -z 1066927 ']'
00:21:06.846   00:51:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:06.846   00:51:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:06.846   00:51:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:06.846  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:06.846   00:51:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:06.846   00:51:55	-- common/autotest_common.sh@10 -- # set +x
00:21:06.846  [2024-12-17 00:51:55.946108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:06.846  [2024-12-17 00:51:55.946184] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066927 ]
00:21:06.846  EAL: No free 2048 kB hugepages reported on node 1
00:21:06.846  [2024-12-17 00:51:56.054104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:06.846  [2024-12-17 00:51:56.100630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:07.104  [2024-12-17 00:51:56.259178] 'OCF_Core' volume operations registered
00:21:07.104  [2024-12-17 00:51:56.261343] 'OCF_Cache' volume operations registered
00:21:07.104  [2024-12-17 00:51:56.263893] 'OCF Composite' volume operations registered
00:21:07.104  [2024-12-17 00:51:56.266054] 'SPDK_block_device' volume operations registered
00:21:07.669   00:51:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:07.669   00:51:56	-- common/autotest_common.sh@862 -- # return 0
00:21:07.669   00:51:56	-- management/remove.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create ocfWT wt aio0 aio1
00:21:07.927  [2024-12-17 00:51:57.140841] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'ocfWT' is waiting for cache device 'aio0' to connect
00:21:07.927  ocfWT
00:21:07.927   00:51:57	-- management/remove.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:07.927   00:51:57	-- management/remove.sh@58 -- # jq -r '.[] .name'
00:21:07.927   00:51:57	-- management/remove.sh@58 -- # grep -qw ocfWT
00:21:08.185   00:51:57	-- management/remove.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete ocfWT
00:21:08.444    00:51:57	-- management/remove.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:08.444    00:51:57	-- management/remove.sh@66 -- # jq -r '.[] | select(.name == "ocfWT") | .name'
00:21:08.702   00:51:57	-- management/remove.sh@66 -- # [[ -z '' ]]
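remove.sh's first pass is a plain lifecycle check: @58 proves the bdev exists (jq extracts every name, grep -qw matches it), @62 deletes it, and @66's select comes back empty, which the [[ -z ]] above accepts. In sequence:

    $rpc_py bdev_ocf_get_bdevs | jq -r '.[] .name' | grep -qw ocfWT   # @58: exists
    $rpc_py bdev_ocf_delete ocfWT                                     # @62
    [[ -z "$($rpc_py bdev_ocf_get_bdevs | jq -r '.[] | select(.name == "ocfWT") | .name')" ]]   # @66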
00:21:08.702   00:51:57	-- management/remove.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:21:08.702   00:51:57	-- management/remove.sh@70 -- # killprocess 1066927
00:21:08.702   00:51:57	-- common/autotest_common.sh@936 -- # '[' -z 1066927 ']'
00:21:08.702   00:51:57	-- common/autotest_common.sh@940 -- # kill -0 1066927
00:21:08.702    00:51:57	-- common/autotest_common.sh@941 -- # uname
00:21:08.702   00:51:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:08.702    00:51:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1066927
00:21:08.702   00:51:57	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:08.702   00:51:57	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:08.702   00:51:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1066927'
00:21:08.702  killing process with pid 1066927
00:21:08.702   00:51:57	-- common/autotest_common.sh@955 -- # kill 1066927
00:21:08.703   00:51:57	-- common/autotest_common.sh@960 -- # wait 1066927
00:21:09.269   00:51:58	-- management/remove.sh@74 -- # spdk_pid=1067292
00:21:09.269   00:51:58	-- management/remove.sh@76 -- # trap 'killprocess $spdk_pid; rm -f aio* $curdir/config ocf_bdevs ocf_bdevs_verify; exit 1' SIGINT SIGTERM EXIT
00:21:09.269   00:51:58	-- management/remove.sh@73 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config
00:21:09.269   00:51:58	-- management/remove.sh@78 -- # waitforlisten 1067292
00:21:09.269   00:51:58	-- common/autotest_common.sh@829 -- # '[' -z 1067292 ']'
00:21:09.269   00:51:58	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:09.269   00:51:58	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:09.269   00:51:58	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:09.269  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:09.269   00:51:58	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:09.269   00:51:58	-- common/autotest_common.sh@10 -- # set +x
00:21:09.269  [2024-12-17 00:51:58.503312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:09.269  [2024-12-17 00:51:58.503394] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067292 ]
00:21:09.527  EAL: No free 2048 kB hugepages reported on node 1
00:21:09.527  [2024-12-17 00:51:58.610278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:09.527  [2024-12-17 00:51:58.660272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:09.785  [2024-12-17 00:51:58.829404] 'OCF_Core' volume operations registered
00:21:09.785  [2024-12-17 00:51:58.831627] 'OCF_Cache' volume operations registered
00:21:09.785  [2024-12-17 00:51:58.834248] 'OCF Composite' volume operations registered
00:21:09.785  [2024-12-17 00:51:58.836482] 'SPDK_block_device' volume operations registered
00:21:10.351   00:51:59	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:10.351   00:51:59	-- common/autotest_common.sh@862 -- # return 0
00:21:10.351    00:51:59	-- management/remove.sh@82 -- # jq -r '.[] | select(name == "ocfWT") | .name'
00:21:10.351    00:51:59	-- management/remove.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:10.351  jq: error: name/0 is not defined at <top-level>, line 1:
00:21:10.351  .[] | select(name == "ocfWT") | .name             
00:21:10.351  jq: 1 compile error
00:21:10.609  Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
00:21:10.609  BrokenPipeError: [Errno 32] Broken pipe
00:21:10.609     00:51:59	-- management/remove.sh@82 -- # trap - ERR
00:21:10.609     00:51:59	-- management/remove.sh@82 -- # print_backtrace
00:21:10.609     00:51:59	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:21:10.609     00:51:59	-- common/autotest_common.sh@1142 -- # return 0
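The compile error above comes from a missing dot in the @82 filter: bare 'name' parses as a (nonexistent) zero-argument jq function, hence 'name/0 is not defined', and the BrokenPipeError is only rpc.py reacting to jq closing the pipe early. The field-access form used by @66 earlier is what was intended:

    $rpc_py bdev_ocf_get_bdevs | jq -r '.[] | select(.name == "ocfWT") | .name'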
00:21:10.609   00:51:59	-- management/remove.sh@82 -- # [[ -z '' ]]
00:21:10.609   00:51:59	-- management/remove.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:21:10.609   00:51:59	-- management/remove.sh@86 -- # killprocess 1067292
00:21:10.609   00:51:59	-- common/autotest_common.sh@936 -- # '[' -z 1067292 ']'
00:21:10.609   00:51:59	-- common/autotest_common.sh@940 -- # kill -0 1067292
00:21:10.609    00:51:59	-- common/autotest_common.sh@941 -- # uname
00:21:10.609   00:51:59	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:10.609    00:51:59	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1067292
00:21:10.609   00:51:59	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:10.609   00:51:59	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:10.609   00:51:59	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1067292'
00:21:10.609  killing process with pid 1067292
00:21:10.609   00:51:59	-- common/autotest_common.sh@955 -- # kill 1067292
00:21:10.609   00:51:59	-- common/autotest_common.sh@960 -- # wait 1067292
00:21:11.176   00:52:00	-- management/remove.sh@87 -- # rm -f aio0 aio1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config ocf_bdevs ocf_bdevs_verify
00:21:11.176  
00:21:11.176  real	0m4.563s
00:21:11.176  user	0m5.484s
00:21:11.176  sys	0m1.298s
00:21:11.176   00:52:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:11.176   00:52:00	-- common/autotest_common.sh@10 -- # set +x
00:21:11.176  ************************************
00:21:11.176  END TEST ocf_remove
00:21:11.176  ************************************
00:21:11.176   00:52:00	-- ocf/ocf.sh@18 -- # run_test ocf_configuration_change /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh
00:21:11.176   00:52:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:11.176   00:52:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:11.176   00:52:00	-- common/autotest_common.sh@10 -- # set +x
00:21:11.176  ************************************
00:21:11.176  START TEST ocf_configuration_change
00:21:11.176  ************************************
00:21:11.176   00:52:00	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh
00:21:11.176    00:52:00	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:11.176     00:52:00	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:11.176     00:52:00	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:11.434    00:52:00	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:11.434    00:52:00	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:11.434    00:52:00	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:11.434    00:52:00	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:11.434    00:52:00	-- scripts/common.sh@335 -- # IFS=.-:
00:21:11.434    00:52:00	-- scripts/common.sh@335 -- # read -ra ver1
00:21:11.434    00:52:00	-- scripts/common.sh@336 -- # IFS=.-:
00:21:11.434    00:52:00	-- scripts/common.sh@336 -- # read -ra ver2
00:21:11.434    00:52:00	-- scripts/common.sh@337 -- # local 'op=<'
00:21:11.434    00:52:00	-- scripts/common.sh@339 -- # ver1_l=2
00:21:11.434    00:52:00	-- scripts/common.sh@340 -- # ver2_l=1
00:21:11.434    00:52:00	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:11.434    00:52:00	-- scripts/common.sh@343 -- # case "$op" in
00:21:11.434    00:52:00	-- scripts/common.sh@344 -- # : 1
00:21:11.434    00:52:00	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:11.434    00:52:00	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:11.434     00:52:00	-- scripts/common.sh@364 -- # decimal 1
00:21:11.434     00:52:00	-- scripts/common.sh@352 -- # local d=1
00:21:11.434     00:52:00	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:11.434     00:52:00	-- scripts/common.sh@354 -- # echo 1
00:21:11.434    00:52:00	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:11.434     00:52:00	-- scripts/common.sh@365 -- # decimal 2
00:21:11.434     00:52:00	-- scripts/common.sh@352 -- # local d=2
00:21:11.434     00:52:00	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:11.434     00:52:00	-- scripts/common.sh@354 -- # echo 2
00:21:11.434    00:52:00	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:11.434    00:52:00	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:11.434    00:52:00	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:11.434    00:52:00	-- scripts/common.sh@367 -- # return 0
00:21:11.434    00:52:00	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:11.434    00:52:00	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:11.434  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.434  		--rc genhtml_branch_coverage=1
00:21:11.434  		--rc genhtml_function_coverage=1
00:21:11.434  		--rc genhtml_legend=1
00:21:11.434  		--rc geninfo_all_blocks=1
00:21:11.434  		--rc geninfo_unexecuted_blocks=1
00:21:11.434  		
00:21:11.434  		'
00:21:11.434    00:52:00	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:11.434  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.434  		--rc genhtml_branch_coverage=1
00:21:11.434  		--rc genhtml_function_coverage=1
00:21:11.434  		--rc genhtml_legend=1
00:21:11.434  		--rc geninfo_all_blocks=1
00:21:11.434  		--rc geninfo_unexecuted_blocks=1
00:21:11.434  		
00:21:11.434  		'
00:21:11.434    00:52:00	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:11.434  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.434  		--rc genhtml_branch_coverage=1
00:21:11.434  		--rc genhtml_function_coverage=1
00:21:11.434  		--rc genhtml_legend=1
00:21:11.434  		--rc geninfo_all_blocks=1
00:21:11.434  		--rc geninfo_unexecuted_blocks=1
00:21:11.434  		
00:21:11.434  		'
00:21:11.434    00:52:00	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:11.434  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:11.434  		--rc genhtml_branch_coverage=1
00:21:11.434  		--rc genhtml_function_coverage=1
00:21:11.434  		--rc genhtml_legend=1
00:21:11.434  		--rc geninfo_all_blocks=1
00:21:11.434  		--rc geninfo_unexecuted_blocks=1
00:21:11.434  		
00:21:11.434  		'
00:21:11.434   00:52:00	-- management/configuration-change.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:21:11.434   00:52:00	-- management/configuration-change.sh@11 -- # cache_line_sizes=(4 8 16 32 64)
00:21:11.434   00:52:00	-- management/configuration-change.sh@12 -- # cache_modes=(wt wb pt wa wi wo)
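configuration-change sweeps the two arrays declared above: five cache line sizes and six cache modes. The trace that follows is the first line-size iteration; condensed, each iteration looks roughly like this (create, verify, tear down; a sketch, not the script's literal body):

    for cache_line_size in "${cache_line_sizes[@]}"; do
        $rpc_py bdev_malloc_create 101 512 -b Malloc0
        $rpc_py bdev_malloc_create 101 512 -b Malloc1
        $rpc_py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size "$cache_line_size"
        $rpc_py bdev_get_bdevs -b Cache0 \
            | jq -e ".[0] | .driver_specific.cache_line_size == $cache_line_size"
        $rpc_py bdev_ocf_delete Cache0
        $rpc_py bdev_malloc_delete Malloc0
        $rpc_py bdev_malloc_delete Malloc1
    done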
00:21:11.434   00:52:00	-- management/configuration-change.sh@15 -- # spdk_pid=1067690
00:21:11.434   00:52:00	-- management/configuration-change.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:21:11.434   00:52:00	-- management/configuration-change.sh@17 -- # waitforlisten 1067690
00:21:11.434   00:52:00	-- common/autotest_common.sh@829 -- # '[' -z 1067690 ']'
00:21:11.434   00:52:00	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:11.434   00:52:00	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:11.434   00:52:00	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:11.434  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:11.434   00:52:00	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:11.434   00:52:00	-- common/autotest_common.sh@10 -- # set +x
00:21:11.434  [2024-12-17 00:52:00.536045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:11.434  [2024-12-17 00:52:00.536121] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067690 ]
00:21:11.434  EAL: No free 2048 kB hugepages reported on node 1
00:21:11.434  [2024-12-17 00:52:00.635581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:11.434  [2024-12-17 00:52:00.683754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:11.692  [2024-12-17 00:52:00.865484] 'OCF_Core' volume operations registered
00:21:11.692  [2024-12-17 00:52:00.867925] 'OCF_Cache' volume operations registered
00:21:11.692  [2024-12-17 00:52:00.870811] 'OCF Composite' volume operations registered
00:21:11.692  [2024-12-17 00:52:00.873280] 'SPDK_block_device' volume operations registered
00:21:12.625   00:52:01	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:12.625   00:52:01	-- common/autotest_common.sh@862 -- # return 0
00:21:12.625   00:52:01	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:21:12.625   00:52:01	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:21:12.625  Malloc0
00:21:12.625   00:52:01	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:21:12.883  Malloc1
00:21:12.883   00:52:02	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 4
00:21:13.141  [2024-12-17 00:52:02.312250] Inserting cache Cache0
00:21:13.141  [2024-12-17 00:52:02.312616] Cache0: Metadata initialized
00:21:13.141  [2024-12-17 00:52:02.313067] Cache0: Successfully added
00:21:13.141  [2024-12-17 00:52:02.313082] Cache0: Cache mode : wt
00:21:13.141  [2024-12-17 00:52:02.321873] Cache0: Super block config offset : 0 kiB
00:21:13.141  [2024-12-17 00:52:02.321901] Cache0: Super block config size : 2200 B
00:21:13.141  [2024-12-17 00:52:02.321908] Cache0: Super block runtime offset : 128 kiB
00:21:13.141  [2024-12-17 00:52:02.321915] Cache0: Super block runtime size : 4 B
00:21:13.141  [2024-12-17 00:52:02.321921] Cache0: Reserved offset : 256 kiB
00:21:13.141  [2024-12-17 00:52:02.321928] Cache0: Reserved size : 128 kiB
00:21:13.141  [2024-12-17 00:52:02.321934] Cache0: Part config offset : 384 kiB
00:21:13.141  [2024-12-17 00:52:02.321941] Cache0: Part config size : 48 kiB
00:21:13.141  [2024-12-17 00:52:02.321947] Cache0: Part runtime offset : 640 kiB
00:21:13.141  [2024-12-17 00:52:02.321954] Cache0: Part runtime size : 72 kiB
00:21:13.141  [2024-12-17 00:52:02.321960] Cache0: Core config offset : 768 kiB
00:21:13.141  [2024-12-17 00:52:02.321967] Cache0: Core config size : 512 kiB
00:21:13.141  [2024-12-17 00:52:02.321973] Cache0: Core runtime offset : 1792 kiB
00:21:13.141  [2024-12-17 00:52:02.321979] Cache0: Core runtime size : 1172 kiB
00:21:13.141  [2024-12-17 00:52:02.321986] Cache0: Core UUID offset : 3072 kiB
00:21:13.141  [2024-12-17 00:52:02.321992] Cache0: Core UUID size : 16384 kiB
00:21:13.141  [2024-12-17 00:52:02.321999] Cache0: Cleaning offset : 35840 kiB
00:21:13.141  [2024-12-17 00:52:02.322005] Cache0: Cleaning size : 196 kiB
00:21:13.141  [2024-12-17 00:52:02.322012] Cache0: LRU list offset : 36096 kiB
00:21:13.141  [2024-12-17 00:52:02.322018] Cache0: LRU list size : 148 kiB
00:21:13.141  [2024-12-17 00:52:02.322024] Cache0: Collision offset : 36352 kiB
00:21:13.141  [2024-12-17 00:52:02.322031] Cache0: Collision size : 196 kiB
00:21:13.141  [2024-12-17 00:52:02.322037] Cache0: List info offset : 36608 kiB
00:21:13.141  [2024-12-17 00:52:02.322044] Cache0: List info size : 148 kiB
00:21:13.141  [2024-12-17 00:52:02.322050] Cache0: Hash offset : 36864 kiB
00:21:13.141  [2024-12-17 00:52:02.322057] Cache0: Hash size : 20 kiB
00:21:13.141  [2024-12-17 00:52:02.322064] Cache0: Cache line size: 4 kiB
00:21:13.141  [2024-12-17 00:52:02.322072] Cache0: Metadata capacity: 18 MiB
00:21:13.141  [2024-12-17 00:52:02.330469] Cache0: Policy 'always' initialized successfully
00:21:13.400  [2024-12-17 00:52:02.444216] Cache0: Done saving cache state!
00:21:13.400  [2024-12-17 00:52:02.475839] Cache0: Cache attached
00:21:13.400  [2024-12-17 00:52:02.475935] Cache0: Successfully attached
00:21:13.400  [2024-12-17 00:52:02.476213] Cache0: Inserting core Malloc1
00:21:13.400  [2024-12-17 00:52:02.476237] Cache0.Malloc1: Sequential cutoff init
00:21:13.400  [2024-12-17 00:52:02.507804] Cache0.Malloc1: Successfully added
00:21:13.400  Cache0
00:21:13.400   00:52:02	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:13.400   00:52:02	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:21:13.656  true
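The -e flag makes jq's exit status carry the check: it exits 0 only when the last output is neither false nor null, so a missing or detached bdev fails the test immediately. The verification above amounts to:

    $rpc_py bdev_ocf_get_bdevs \
        | jq -e '.[0] | .started and .cache.attached and .core.attached'
    # prints "true" and exits 0 when all three flags hold; exits nonzero otherwise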
00:21:13.656   00:52:02	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 4'
00:21:13.656   00:52:02	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:13.914  true
00:21:13.914   00:52:03	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:13.914   00:52:03	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 4'
00:21:14.172  true
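The second check targets persistence rather than runtime state: save_subsystem_config -n bdev dumps the bdev subsystem's JSON config, and jq confirms the bdev_ocf_create entry recorded the requested cache line size, e.g. for this 4 kiB pass:

    $rpc_py save_subsystem_config -n bdev \
        | jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 4'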
00:21:14.172   00:52:03	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:21:14.430  [2024-12-17 00:52:03.524636] Cache0: Flushing cache
00:21:14.430  [2024-12-17 00:52:03.524671] Cache0: Flushing cache completed
00:21:14.430  [2024-12-17 00:52:03.525684] Cache0.Malloc1: Removing core
00:21:14.430  [2024-12-17 00:52:03.558005] Cache0: Core Malloc1 successfully removed
00:21:14.430  [2024-12-17 00:52:03.558063] Cache0: Stopping cache
00:21:14.430  [2024-12-17 00:52:03.664263] Cache0: Done saving cache state!
00:21:14.430  [2024-12-17 00:52:03.678532] Cache Cache0 successfully stopped
00:21:14.688   00:52:03	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:21:14.946   00:52:03	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:21:15.204   00:52:04	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:21:15.204   00:52:04	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:21:15.462  Malloc0
00:21:15.462   00:52:04	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:21:15.720  Malloc1
00:21:15.720   00:52:04	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 8
00:21:15.979  [2024-12-17 00:52:05.009609] Inserting cache Cache0
00:21:15.979  [2024-12-17 00:52:05.009979] Cache0: Metadata initialized
00:21:15.979  [2024-12-17 00:52:05.010418] Cache0: Successfully added
00:21:15.979  [2024-12-17 00:52:05.010426] Cache0: Cache mode : wt
00:21:15.979  [2024-12-17 00:52:05.019157] Cache0: Super block config offset : 0 kiB
00:21:15.979  [2024-12-17 00:52:05.019179] Cache0: Super block config size : 2200 B
00:21:15.979  [2024-12-17 00:52:05.019186] Cache0: Super block runtime offset : 128 kiB
00:21:15.979  [2024-12-17 00:52:05.019192] Cache0: Super block runtime size : 4 B
00:21:15.979  [2024-12-17 00:52:05.019199] Cache0: Reserved offset : 256 kiB
00:21:15.979  [2024-12-17 00:52:05.019206] Cache0: Reserved size : 128 kiB
00:21:15.979  [2024-12-17 00:52:05.019212] Cache0: Part config offset : 384 kiB
00:21:15.979  [2024-12-17 00:52:05.019219] Cache0: Part config size : 48 kiB
00:21:15.979  [2024-12-17 00:52:05.019225] Cache0: Part runtime offset : 640 kiB
00:21:15.979  [2024-12-17 00:52:05.019232] Cache0: Part runtime size : 72 kiB
00:21:15.979  [2024-12-17 00:52:05.019238] Cache0: Core config offset : 768 kiB
00:21:15.979  [2024-12-17 00:52:05.019244] Cache0: Core config size : 512 kiB
00:21:15.979  [2024-12-17 00:52:05.019251] Cache0: Core runtime offset : 1792 kiB
00:21:15.979  [2024-12-17 00:52:05.019257] Cache0: Core runtime size : 1172 kiB
00:21:15.980  [2024-12-17 00:52:05.019264] Cache0: Core UUID offset : 3072 kiB
00:21:15.980  [2024-12-17 00:52:05.019270] Cache0: Core UUID size : 16384 kiB
00:21:15.980  [2024-12-17 00:52:05.019277] Cache0: Cleaning offset : 35840 kiB
00:21:15.980  [2024-12-17 00:52:05.019283] Cache0: Cleaning size : 100 kiB
00:21:15.980  [2024-12-17 00:52:05.019290] Cache0: LRU list offset : 35968 kiB
00:21:15.980  [2024-12-17 00:52:05.019296] Cache0: LRU list size : 76 kiB
00:21:15.980  [2024-12-17 00:52:05.019303] Cache0: Collision offset : 36096 kiB
00:21:15.980  [2024-12-17 00:52:05.019309] Cache0: Collision size : 116 kiB
00:21:15.980  [2024-12-17 00:52:05.019315] Cache0: List info offset : 36224 kiB
00:21:15.980  [2024-12-17 00:52:05.019322] Cache0: List info size : 76 kiB
00:21:15.980  [2024-12-17 00:52:05.019328] Cache0: Hash offset : 36352 kiB
00:21:15.980  [2024-12-17 00:52:05.019335] Cache0: Hash size : 12 kiB
00:21:15.980  [2024-12-17 00:52:05.019342] Cache0: Cache line size: 8 kiB
00:21:15.980  [2024-12-17 00:52:05.019350] Cache0: Metadata capacity: 18 MiB
00:21:15.980  [2024-12-17 00:52:05.027741] Cache0: Policy 'always' initialized successfully
00:21:15.980  [2024-12-17 00:52:05.125174] Cache0: Done saving cache state!
00:21:15.980  [2024-12-17 00:52:05.155901] Cache0: Cache attached
00:21:15.980  [2024-12-17 00:52:05.155988] Cache0: Successfully attached
00:21:15.980  [2024-12-17 00:52:05.156272] Cache0: Inserting core Malloc1
00:21:15.980  [2024-12-17 00:52:05.156294] Cache0.Malloc1: Sequential cutoff init
00:21:15.980  [2024-12-17 00:52:05.186956] Cache0.Malloc1: Successfully added
00:21:15.980  Cache0
00:21:15.980   00:52:05	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:15.980   00:52:05	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:21:16.238  true
00:21:16.238   00:52:05	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:16.238   00:52:05	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 8'
00:21:16.495  true
00:21:16.495   00:52:05	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:16.495   00:52:05	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 8'
00:21:16.753  true
00:21:16.753   00:52:05	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:21:17.011  [2024-12-17 00:52:06.111715] Cache0: Flushing cache
00:21:17.011  [2024-12-17 00:52:06.111752] Cache0: Flushing cache completed
00:21:17.011  [2024-12-17 00:52:06.112400] Cache0.Malloc1: Removing core
00:21:17.011  [2024-12-17 00:52:06.144529] Cache0: Core Malloc1 successfully removed
00:21:17.011  [2024-12-17 00:52:06.144587] Cache0: Stopping cache
00:21:17.011  [2024-12-17 00:52:06.238707] Cache0: Done saving cache state!
00:21:17.011  [2024-12-17 00:52:06.253817] Cache Cache0 successfully stopped
00:21:17.269   00:52:06	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:21:17.527   00:52:06	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:21:17.784   00:52:06	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:21:17.784   00:52:06	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:21:18.042  Malloc0
00:21:18.042   00:52:07	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:21:18.299  Malloc1
00:21:18.299   00:52:07	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 16
00:21:18.299  [2024-12-17 00:52:07.557356] Inserting cache Cache0
00:21:18.299  [2024-12-17 00:52:07.557775] Cache0: Metadata initialized
00:21:18.299  [2024-12-17 00:52:07.558220] Cache0: Successfully added
00:21:18.299  [2024-12-17 00:52:07.558229] Cache0: Cache mode : wt
00:21:18.558  [2024-12-17 00:52:07.567394] Cache0: Super block config offset : 0 kiB
00:21:18.558  [2024-12-17 00:52:07.567419] Cache0: Super block config size : 2200 B
00:21:18.558  [2024-12-17 00:52:07.567427] Cache0: Super block runtime offset : 128 kiB
00:21:18.558  [2024-12-17 00:52:07.567433] Cache0: Super block runtime size : 4 B
00:21:18.558  [2024-12-17 00:52:07.567440] Cache0: Reserved offset : 256 kiB
00:21:18.558  [2024-12-17 00:52:07.567446] Cache0: Reserved size : 128 kiB
00:21:18.558  [2024-12-17 00:52:07.567453] Cache0: Part config offset : 384 kiB
00:21:18.558  [2024-12-17 00:52:07.567459] Cache0: Part config size : 48 kiB
00:21:18.558  [2024-12-17 00:52:07.567466] Cache0: Part runtime offset : 640 kiB
00:21:18.558  [2024-12-17 00:52:07.567472] Cache0: Part runtime size : 72 kiB
00:21:18.558  [2024-12-17 00:52:07.567479] Cache0: Core config offset : 768 kiB
00:21:18.558  [2024-12-17 00:52:07.567485] Cache0: Core config size : 512 kiB
00:21:18.558  [2024-12-17 00:52:07.567491] Cache0: Core runtime offset : 1792 kiB
00:21:18.558  [2024-12-17 00:52:07.567498] Cache0: Core runtime size : 1172 kiB
00:21:18.558  [2024-12-17 00:52:07.567504] Cache0: Core UUID offset : 3072 kiB
00:21:18.558  [2024-12-17 00:52:07.567510] Cache0: Core UUID size : 16384 kiB
00:21:18.558  [2024-12-17 00:52:07.567517] Cache0: Cleaning offset : 35840 kiB
00:21:18.558  [2024-12-17 00:52:07.567523] Cache0: Cleaning size : 52 kiB
00:21:18.558  [2024-12-17 00:52:07.567530] Cache0: LRU list offset : 35968 kiB
00:21:18.558  [2024-12-17 00:52:07.567536] Cache0: LRU list size : 40 kiB
00:21:18.558  [2024-12-17 00:52:07.567543] Cache0: Collision offset : 36096 kiB
00:21:18.558  [2024-12-17 00:52:07.567549] Cache0: Collision size : 76 kiB
00:21:18.558  [2024-12-17 00:52:07.567555] Cache0: List info offset : 36224 kiB
00:21:18.558  [2024-12-17 00:52:07.567562] Cache0: List info size : 40 kiB
00:21:18.558  [2024-12-17 00:52:07.567568] Cache0: Hash offset : 36352 kiB
00:21:18.558  [2024-12-17 00:52:07.567575] Cache0: Hash size : 8 kiB
00:21:18.558  [2024-12-17 00:52:07.567582] Cache0: Cache line size: 16 kiB
00:21:18.558  [2024-12-17 00:52:07.567590] Cache0: Metadata capacity: 18 MiB
00:21:18.558  [2024-12-17 00:52:07.576012] Cache0: Policy 'always' initialized successfully
00:21:18.558  [2024-12-17 00:52:07.666645] Cache0: Done saving cache state!
00:21:18.558  [2024-12-17 00:52:07.698540] Cache0: Cache attached
00:21:18.558  [2024-12-17 00:52:07.698636] Cache0: Successfully attached
00:21:18.558  [2024-12-17 00:52:07.698935] Cache0: Inserting core Malloc1
00:21:18.558  [2024-12-17 00:52:07.698962] Cache0.Malloc1: Sequential cutoff init
00:21:18.558  [2024-12-17 00:52:07.730686] Cache0.Malloc1: Successfully added
00:21:18.558  Cache0
00:21:18.558   00:52:07	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:21:18.558   00:52:07	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:18.913  true
00:21:18.913   00:52:08	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:18.913   00:52:08	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 16'
00:21:19.172  true
00:21:19.172   00:52:08	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:19.172   00:52:08	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 16'
00:21:19.430  true
00:21:19.430   00:52:08	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:21:19.689  [2024-12-17 00:52:08.755456] Cache0: Flushing cache
00:21:19.689  [2024-12-17 00:52:08.755494] Cache0: Flushing cache completed
00:21:19.689  [2024-12-17 00:52:08.755973] Cache0.Malloc1: Removing core
00:21:19.689  [2024-12-17 00:52:08.788360] Cache0: Core Malloc1 successfully removed
00:21:19.689  [2024-12-17 00:52:08.788417] Cache0: Stopping cache
00:21:19.689  [2024-12-17 00:52:08.877569] Cache0: Done saving cache state!
00:21:19.689  [2024-12-17 00:52:08.892898] Cache Cache0 successfully stopped
00:21:19.689   00:52:08	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:21:19.947   00:52:09	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:21:20.205   00:52:09	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:21:20.205   00:52:09	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:21:20.463  Malloc0
00:21:20.463   00:52:09	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:21:20.721  Malloc1
00:21:20.721   00:52:09	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 32
00:21:20.979  [2024-12-17 00:52:10.188396] Inserting cache Cache0
00:21:20.979  [2024-12-17 00:52:10.188817] Cache0: Metadata initialized
00:21:20.979  [2024-12-17 00:52:10.189260] Cache0: Successfully added
00:21:20.979  [2024-12-17 00:52:10.189269] Cache0: Cache mode : wt
00:21:20.979  [2024-12-17 00:52:10.198035] Cache0: Super block config offset : 0 kiB
00:21:20.979  [2024-12-17 00:52:10.198057] Cache0: Super block config size : 2200 B
00:21:20.979  [2024-12-17 00:52:10.198065] Cache0: Super block runtime offset : 128 kiB
00:21:20.979  [2024-12-17 00:52:10.198071] Cache0: Super block runtime size : 4 B
00:21:20.979  [2024-12-17 00:52:10.198078] Cache0: Reserved offset : 256 kiB
00:21:20.979  [2024-12-17 00:52:10.198084] Cache0: Reserved size : 128 kiB
00:21:20.979  [2024-12-17 00:52:10.198091] Cache0: Part config offset : 384 kiB
00:21:20.979  [2024-12-17 00:52:10.198097] Cache0: Part config size : 48 kiB
00:21:20.979  [2024-12-17 00:52:10.198104] Cache0: Part runtime offset : 640 kiB
00:21:20.979  [2024-12-17 00:52:10.198110] Cache0: Part runtime size : 72 kiB
00:21:20.979  [2024-12-17 00:52:10.198117] Cache0: Core config offset : 768 kiB
00:21:20.979  [2024-12-17 00:52:10.198123] Cache0: Core config size : 512 kiB
00:21:20.979  [2024-12-17 00:52:10.198129] Cache0: Core runtime offset : 1792 kiB
00:21:20.979  [2024-12-17 00:52:10.198136] Cache0: Core runtime size : 1172 kiB
00:21:20.979  [2024-12-17 00:52:10.198142] Cache0: Core UUID offset : 3072 kiB
00:21:20.979  [2024-12-17 00:52:10.198149] Cache0: Core UUID size : 16384 kiB
00:21:20.979  [2024-12-17 00:52:10.198155] Cache0: Cleaning offset : 35840 kiB
00:21:20.979  [2024-12-17 00:52:10.198161] Cache0: Cleaning size : 28 kiB
00:21:20.979  [2024-12-17 00:52:10.198168] Cache0: LRU list offset : 35968 kiB
00:21:20.979  [2024-12-17 00:52:10.198181] Cache0: LRU list size : 20 kiB
00:21:20.979  [2024-12-17 00:52:10.198188] Cache0: Collision offset : 36096 kiB
00:21:20.979  [2024-12-17 00:52:10.198194] Cache0: Collision size : 56 kiB
00:21:20.979  [2024-12-17 00:52:10.198201] Cache0: List info offset : 36224 kiB
00:21:20.979  [2024-12-17 00:52:10.198207] Cache0: List info size : 20 kiB
00:21:20.979  [2024-12-17 00:52:10.198214] Cache0: Hash offset : 36352 kiB
00:21:20.979  [2024-12-17 00:52:10.198220] Cache0: Hash size : 4 kiB
00:21:20.979  [2024-12-17 00:52:10.198227] Cache0: Cache line size: 32 kiB
00:21:20.979  [2024-12-17 00:52:10.198236] Cache0: Metadata capacity: 18 MiB
00:21:20.979  [2024-12-17 00:52:10.206577] Cache0: Policy 'always' initialized successfully
00:21:21.237  [2024-12-17 00:52:10.292534] Cache0: Done saving cache state!
00:21:21.237  [2024-12-17 00:52:10.323577] Cache0: Cache attached
00:21:21.237  [2024-12-17 00:52:10.323673] Cache0: Successfully attached
00:21:21.237  [2024-12-17 00:52:10.323976] Cache0: Inserting core Malloc1
00:21:21.237  [2024-12-17 00:52:10.324002] Cache0.Malloc1: Sequential cutoff init
00:21:21.237  [2024-12-17 00:52:10.354818] Cache0.Malloc1: Successfully added
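Comparing the metadata dumps across passes shows the variable regions shrinking roughly in proportion as the cache line grows, since fewer lines need tracking: Cleaning size runs 196 -> 100 -> 52 -> 28 kiB and Hash size 20 -> 12 -> 8 -> 4 kiB for line sizes 4 -> 8 -> 16 -> 32 kiB (the 64 kiB pass below continues the pattern with Cleaning 16 kiB and LRU list 12 kiB), while the fixed regions, the 2200 B super block config and the 18 MiB metadata capacity, stay constant.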
00:21:21.237  Cache0
00:21:21.237   00:52:10	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:21.237   00:52:10	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:21:21.495  true
00:21:21.495   00:52:10	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 32'
00:21:21.495   00:52:10	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:21.753  true
00:21:21.753   00:52:10	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:21.753   00:52:10	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 32'
00:21:22.011  true
00:21:22.011   00:52:11	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:21:22.269  [2024-12-17 00:52:11.343686] Cache0: Flushing cache
00:21:22.269  [2024-12-17 00:52:11.343725] Cache0: Flushing cache completed
00:21:22.269  [2024-12-17 00:52:11.344106] Cache0.Malloc1: Removing core
00:21:22.269  [2024-12-17 00:52:11.377239] Cache0: Core Malloc1 successfully removed
00:21:22.269  [2024-12-17 00:52:11.377298] Cache0: Stopping cache
00:21:22.269  [2024-12-17 00:52:11.462671] Cache0: Done saving cache state!
00:21:22.269  [2024-12-17 00:52:11.478431] Cache Cache0 successfully stopped
00:21:22.269   00:52:11	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:21:22.527   00:52:11	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:21:22.785   00:52:12	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:21:22.785   00:52:12	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:21:23.042  Malloc0
00:21:23.042   00:52:12	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:21:23.300  Malloc1
00:21:23.301   00:52:12	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 64
00:21:23.558  [2024-12-17 00:52:12.733607] Inserting cache Cache0
00:21:23.558  [2024-12-17 00:52:12.734042] Cache0: Metadata initialized
00:21:23.558  [2024-12-17 00:52:12.734482] Cache0: Successfully added
00:21:23.558  [2024-12-17 00:52:12.734491] Cache0: Cache mode : wt
00:21:23.558  [2024-12-17 00:52:12.743343] Cache0: Super block config offset : 0 kiB
00:21:23.558  [2024-12-17 00:52:12.743366] Cache0: Super block config size : 2200 B
00:21:23.558  [2024-12-17 00:52:12.743373] Cache0: Super block runtime offset : 128 kiB
00:21:23.558  [2024-12-17 00:52:12.743380] Cache0: Super block runtime size : 4 B
00:21:23.558  [2024-12-17 00:52:12.743386] Cache0: Reserved offset : 256 kiB
00:21:23.558  [2024-12-17 00:52:12.743393] Cache0: Reserved size : 128 kiB
00:21:23.558  [2024-12-17 00:52:12.743399] Cache0: Part config offset : 384 kiB
00:21:23.558  [2024-12-17 00:52:12.743406] Cache0: Part config size : 48 kiB
00:21:23.558  [2024-12-17 00:52:12.743413] Cache0: Part runtime offset : 640 kiB
00:21:23.558  [2024-12-17 00:52:12.743426] Cache0: Part runtime size : 72 kiB
00:21:23.558  [2024-12-17 00:52:12.743433] Cache0: Core config offset : 768 kiB
00:21:23.558  [2024-12-17 00:52:12.743439] Cache0: Core config size : 512 kiB
00:21:23.558  [2024-12-17 00:52:12.743446] Cache0: Core runtime offset : 1792 kiB
00:21:23.558  [2024-12-17 00:52:12.743452] Cache0: Core runtime size : 1172 kiB
00:21:23.558  [2024-12-17 00:52:12.743459] Cache0: Core UUID offset : 3072 kiB
00:21:23.558  [2024-12-17 00:52:12.743465] Cache0: Core UUID size : 16384 kiB
00:21:23.558  [2024-12-17 00:52:12.743472] Cache0: Cleaning offset : 35840 kiB
00:21:23.558  [2024-12-17 00:52:12.743478] Cache0: Cleaning size : 16 kiB
00:21:23.558  [2024-12-17 00:52:12.743485] Cache0: LRU list offset : 35968 kiB
00:21:23.558  [2024-12-17 00:52:12.743491] Cache0: LRU list size : 12 kiB
00:21:23.558  [2024-12-17 00:52:12.743498] Cache0: Collision offset : 36096 kiB
00:21:23.558  [2024-12-17 00:52:12.743504] Cache0: Collision size : 44 kiB
00:21:23.558  [2024-12-17 00:52:12.743511] Cache0: List info offset : 36224 kiB
00:21:23.558  [2024-12-17 00:52:12.743517] Cache0: List info size : 12 kiB
00:21:23.558  [2024-12-17 00:52:12.743524] Cache0: Hash offset : 36352 kiB
00:21:23.558  [2024-12-17 00:52:12.743531] Cache0: Hash size : 4 kiB
00:21:23.558  [2024-12-17 00:52:12.743538] Cache0: Cache line size: 64 kiB
00:21:23.558  [2024-12-17 00:52:12.743547] Cache0: Metadata capacity: 18 MiB
00:21:23.558  [2024-12-17 00:52:12.752020] Cache0: Policy 'always' initialized successfully
00:21:23.815  [2024-12-17 00:52:12.836716] Cache0: Done saving cache state!
00:21:23.815  [2024-12-17 00:52:12.868533] Cache0: Cache attached
00:21:23.815  [2024-12-17 00:52:12.868630] Cache0: Successfully attached
00:21:23.815  [2024-12-17 00:52:12.868929] Cache0: Inserting core Malloc1
00:21:23.815  [2024-12-17 00:52:12.868957] Cache0.Malloc1: Sequential cutoff init
00:21:23.815  [2024-12-17 00:52:12.900959] Cache0.Malloc1: Successfully added
00:21:23.815  Cache0
00:21:23.815   00:52:12	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:23.815   00:52:12	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:21:24.073  true
00:21:24.073   00:52:13	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:24.073   00:52:13	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 64'
00:21:24.331  true
00:21:24.331   00:52:13	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:24.331   00:52:13	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 64'
00:21:24.589  true
00:21:24.589   00:52:13	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:21:24.847  [2024-12-17 00:52:13.893696] Cache0: Flushing cache
00:21:24.847  [2024-12-17 00:52:13.893732] Cache0: Flushing cache completed
00:21:24.847  [2024-12-17 00:52:13.894104] Cache0.Malloc1: Removing core
00:21:24.847  [2024-12-17 00:52:13.926439] Cache0: Core Malloc1 successfully removed
00:21:24.847  [2024-12-17 00:52:13.926497] Cache0: Stopping cache
00:21:24.847  [2024-12-17 00:52:14.009177] Cache0: Done saving cache state!
00:21:24.847  [2024-12-17 00:52:14.023577] Cache Cache0 successfully stopped
00:21:24.847   00:52:14	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:21:25.105   00:52:14	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:21:25.363   00:52:14	-- management/configuration-change.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:21:25.621  Malloc0
00:21:25.621   00:52:14	-- management/configuration-change.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:21:25.879  Malloc1
00:21:25.879   00:52:15	-- management/configuration-change.sh@42 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1
00:21:26.136  [2024-12-17 00:52:15.322775] Inserting cache Cache0
00:21:26.136  [2024-12-17 00:52:15.323259] Cache0: Metadata initialized
00:21:26.136  [2024-12-17 00:52:15.323702] Cache0: Successfully added
00:21:26.136  [2024-12-17 00:52:15.323710] Cache0: Cache mode : wt
00:21:26.136  [2024-12-17 00:52:15.333436] Cache0: Super block config offset : 0 kiB
00:21:26.136  [2024-12-17 00:52:15.333466] Cache0: Super block config size : 2200 B
00:21:26.136  [2024-12-17 00:52:15.333473] Cache0: Super block runtime offset : 128 kiB
00:21:26.136  [2024-12-17 00:52:15.333480] Cache0: Super block runtime size : 4 B
00:21:26.136  [2024-12-17 00:52:15.333487] Cache0: Reserved offset : 256 kiB
00:21:26.136  [2024-12-17 00:52:15.333493] Cache0: Reserved size : 128 kiB
00:21:26.136  [2024-12-17 00:52:15.333500] Cache0: Part config offset : 384 kiB
00:21:26.136  [2024-12-17 00:52:15.333506] Cache0: Part config size : 48 kiB
00:21:26.136  [2024-12-17 00:52:15.333513] Cache0: Part runtime offset : 640 kiB
00:21:26.136  [2024-12-17 00:52:15.333519] Cache0: Part runtime size : 72 kiB
00:21:26.136  [2024-12-17 00:52:15.333526] Cache0: Core config offset : 768 kiB
00:21:26.136  [2024-12-17 00:52:15.333532] Cache0: Core config size : 512 kiB
00:21:26.136  [2024-12-17 00:52:15.333539] Cache0: Core runtime offset : 1792 kiB
00:21:26.136  [2024-12-17 00:52:15.333545] Cache0: Core runtime size : 1172 kiB
00:21:26.136  [2024-12-17 00:52:15.333552] Cache0: Core UUID offset : 3072 kiB
00:21:26.136  [2024-12-17 00:52:15.333558] Cache0: Core UUID size : 16384 kiB
00:21:26.136  [2024-12-17 00:52:15.333565] Cache0: Cleaning offset : 35840 kiB
00:21:26.136  [2024-12-17 00:52:15.333571] Cache0: Cleaning size : 196 kiB
00:21:26.136  [2024-12-17 00:52:15.333578] Cache0: LRU list offset : 36096 kiB
00:21:26.136  [2024-12-17 00:52:15.333584] Cache0: LRU list size : 148 kiB
00:21:26.136  [2024-12-17 00:52:15.333591] Cache0: Collision offset : 36352 kiB
00:21:26.137  [2024-12-17 00:52:15.333597] Cache0: Collision size : 196 kiB
00:21:26.137  [2024-12-17 00:52:15.333603] Cache0: List info offset : 36608 kiB
00:21:26.137  [2024-12-17 00:52:15.333610] Cache0: List info size : 148 kiB
00:21:26.137  [2024-12-17 00:52:15.333617] Cache0: Hash offset : 36864 kiB
00:21:26.137  [2024-12-17 00:52:15.333623] Cache0: Hash size : 20 kiB
00:21:26.137  [2024-12-17 00:52:15.333631] Cache0: Cache line size: 4 kiB
00:21:26.137  [2024-12-17 00:52:15.333640] Cache0: Metadata capacity: 18 MiB
00:21:26.137  [2024-12-17 00:52:15.342891] Cache0: Policy 'always' initialized successfully
00:21:26.393  [2024-12-17 00:52:15.457086] Cache0: Done saving cache state!
00:21:26.393  [2024-12-17 00:52:15.488800] Cache0: Cache attached
00:21:26.393  [2024-12-17 00:52:15.488900] Cache0: Successfully attached
00:21:26.393  [2024-12-17 00:52:15.489186] Cache0: Inserting core Malloc1
00:21:26.393  [2024-12-17 00:52:15.489212] Cache0.Malloc1: Sequential cutoff init
00:21:26.393  [2024-12-17 00:52:15.520949] Cache0.Malloc1: Successfully added
00:21:26.393  Cache0
00:21:26.393   00:52:15	-- management/configuration-change.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:21:26.393   00:52:15	-- management/configuration-change.sh@44 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:21:26.651  true
00:21:26.651   00:52:15	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:21:26.651   00:52:15	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wt
00:21:26.909  [2024-12-17 00:52:16.024049] Cache0: Cache mode 'Write Through' is already set
00:21:26.909  wt
00:21:26.909   00:52:16	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:26.909   00:52:16	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wt"'
00:21:27.167  true
00:21:27.167   00:52:16	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wt"'
00:21:27.167   00:52:16	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:27.425  true
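With the line-size sweep done, the test iterates cache_modes, switching the live cache with bdev_ocf_set_cache_mode and re-running both checks after each switch. One iteration, mirroring the trace:

    for cache_mode in "${cache_modes[@]}"; do
        $rpc_py bdev_ocf_set_cache_mode Cache0 "$cache_mode"
        $rpc_py bdev_get_bdevs -b Cache0 \
            | jq -e ".[0] | .driver_specific.mode == \"$cache_mode\""
        $rpc_py save_subsystem_config -n bdev \
            | jq -e ".config | .[] | select(.method == \"bdev_ocf_create\") | .params.mode == \"$cache_mode\""
    done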
00:21:27.425   00:52:16	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:21:27.425   00:52:16	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wb
00:21:27.683  [2024-12-17 00:52:16.758241] Cache0: Changing cache mode from 'Write Through' to 'Write Back' successful
00:21:27.683  wb
00:21:27.683   00:52:16	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:27.683   00:52:16	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wb"'
00:21:27.940  true
00:21:27.940   00:52:17	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:27.940   00:52:17	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wb"'
00:21:28.198  true
00:21:28.198   00:52:17	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:21:28.198   00:52:17	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 pt
00:21:28.456  [2024-12-17 00:52:17.504448] Cache0: Changing cache mode from 'Write Back' to 'Pass Through' successful
00:21:28.456  pt
00:21:28.456   00:52:17	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "pt"'
00:21:28.456   00:52:17	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:28.714  true
00:21:28.714   00:52:17	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:28.714   00:52:17	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "pt"'
00:21:28.975  true
00:21:28.975   00:52:18	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:21:28.975   00:52:18	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wa
00:21:29.233  [2024-12-17 00:52:18.242453] Cache0: Changing cache mode from 'Pass Through' to 'Write Around' successful
00:21:29.233  wa
00:21:29.233   00:52:18	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:29.233   00:52:18	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wa"'
00:21:29.492  true
00:21:29.492   00:52:18	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:29.492   00:52:18	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wa"'
00:21:29.750  true
00:21:29.750   00:52:18	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:21:29.750   00:52:18	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wi
00:21:29.750  [2024-12-17 00:52:18.992596] Cache0: Changing cache mode from 'Write Around' to 'Write Invalidate' successful
00:21:29.750  wi
00:21:30.008   00:52:19	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:30.008   00:52:19	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wi"'
00:21:30.008  true
00:21:30.266   00:52:19	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:30.267   00:52:19	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wi"'
00:21:30.267  true
00:21:30.267   00:52:19	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:21:30.267   00:52:19	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wo
00:21:30.525  [2024-12-17 00:52:19.734727] Cache0: Changing cache mode from 'Write Invalidate' to 'Write Only' successful
00:21:30.525  wo
00:21:30.525   00:52:19	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:21:30.525   00:52:19	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wo"'
00:21:30.783  true
00:21:30.783   00:52:19	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:30.783   00:52:19	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wo"'
00:21:31.042  true
00:21:31.042   00:52:20	-- management/configuration-change.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p always -t 64
00:21:31.300  [2024-12-17 00:52:20.472876] Cache0.Malloc1: Changing sequential cutoff policy from full to always
00:21:31.300  [2024-12-17 00:52:20.472956] Cache0.Malloc1: Changing sequential cutoff threshold from 1024 to 65536 bytes successful
00:21:31.300   00:52:20	-- management/configuration-change.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p never -t 16
00:21:31.558  [2024-12-17 00:52:20.717573] Cache0.Malloc1: Changing sequential cutoff policy from always to never
00:21:31.558  [2024-12-17 00:52:20.717644] Cache0.Malloc1: Changing sequential cutoff threshold from 65536 to 16384 bytes successful
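Judging from the messages above, bdev_ocf_set_seqcutoff takes -t in KiB: -t 64 lands as 65536 bytes and -t 16 as 16384 bytes, while -p picks the policy (the cache started at policy 'full' with a 1024-byte threshold). For example:

    $rpc_py bdev_ocf_set_seqcutoff Cache0 -p always -t 64   # 64 KiB = 65536 B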
00:21:31.558   00:52:20	-- management/configuration-change.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:21:31.558   00:52:20	-- management/configuration-change.sh@63 -- # killprocess 1067690
00:21:31.558   00:52:20	-- common/autotest_common.sh@936 -- # '[' -z 1067690 ']'
00:21:31.558   00:52:20	-- common/autotest_common.sh@940 -- # kill -0 1067690
00:21:31.558    00:52:20	-- common/autotest_common.sh@941 -- # uname
00:21:31.558   00:52:20	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:31.558    00:52:20	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1067690
00:21:31.558   00:52:20	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:31.558   00:52:20	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:31.558   00:52:20	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1067690'
00:21:31.558  killing process with pid 1067690
00:21:31.558   00:52:20	-- common/autotest_common.sh@955 -- # kill 1067690
00:21:31.558   00:52:20	-- common/autotest_common.sh@960 -- # wait 1067690
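killprocess refuses to signal sudo itself (the reactor_0 comparison above), then kills the pid and waits on it so the cache teardown below runs to completion before the test exits. A hypothetical reduction of the helper, leaving out guards the real common/autotest_common.sh version may have:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                                  # still alive?
        if [[ $(uname) == Linux ]]; then
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                             # reap; may race exit
    }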
00:21:31.817  [2024-12-17 00:52:20.952019] Cache0: Flushing cache
00:21:31.817  [2024-12-17 00:52:20.952070] Cache0: Flushing cache completed
00:21:31.817  [2024-12-17 00:52:20.952121] Cache0: Stopping cache
00:21:31.817  [2024-12-17 00:52:21.064307] Cache0: Done saving cache state!
00:21:32.075  [2024-12-17 00:52:21.079540] Cache Cache0 successfully stopped
00:21:32.333  
00:21:32.333  real	0m21.141s
00:21:32.333  user	0m35.825s
00:21:32.333  sys	0m3.492s
00:21:32.333   00:52:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:32.333   00:52:21	-- common/autotest_common.sh@10 -- # set +x
00:21:32.333  ************************************
00:21:32.333  END TEST ocf_configuration_change
00:21:32.333  ************************************
00:21:32.333  
00:21:32.333  real	1m45.588s
00:21:32.333  user	2m45.301s
00:21:32.333  sys	0m18.329s
00:21:32.333   00:52:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:32.333   00:52:21	-- common/autotest_common.sh@10 -- # set +x
00:21:32.333  ************************************
00:21:32.333  END TEST ocf
00:21:32.333  ************************************
00:21:32.333   00:52:21	-- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:21:32.333   00:52:21	-- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:21:32.333   00:52:21	-- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:21:32.333   00:52:21	-- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:21:32.333   00:52:21	-- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:21:32.333   00:52:21	-- spdk/autotest.sh@353 -- # [[ 1 -eq 1 ]]
00:21:32.333   00:52:21	-- spdk/autotest.sh@354 -- # run_test scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh
00:21:32.333   00:52:21	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:32.333   00:52:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:32.333   00:52:21	-- common/autotest_common.sh@10 -- # set +x
00:21:32.333  ************************************
00:21:32.333  START TEST scheduler
00:21:32.333  ************************************
00:21:32.333   00:52:21	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh
00:21:32.592  * Looking for test storage...
00:21:32.592  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:21:32.592    00:52:21	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:32.592     00:52:21	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:32.592     00:52:21	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:32.592    00:52:21	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:32.592    00:52:21	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:32.592    00:52:21	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:32.592    00:52:21	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:32.592    00:52:21	-- scripts/common.sh@335 -- # IFS=.-:
00:21:32.592    00:52:21	-- scripts/common.sh@335 -- # read -ra ver1
00:21:32.592    00:52:21	-- scripts/common.sh@336 -- # IFS=.-:
00:21:32.592    00:52:21	-- scripts/common.sh@336 -- # read -ra ver2
00:21:32.592    00:52:21	-- scripts/common.sh@337 -- # local 'op=<'
00:21:32.592    00:52:21	-- scripts/common.sh@339 -- # ver1_l=2
00:21:32.592    00:52:21	-- scripts/common.sh@340 -- # ver2_l=1
00:21:32.592    00:52:21	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:32.592    00:52:21	-- scripts/common.sh@343 -- # case "$op" in
00:21:32.592    00:52:21	-- scripts/common.sh@344 -- # : 1
00:21:32.592    00:52:21	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:32.592    00:52:21	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:32.592     00:52:21	-- scripts/common.sh@364 -- # decimal 1
00:21:32.592     00:52:21	-- scripts/common.sh@352 -- # local d=1
00:21:32.592     00:52:21	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:32.592     00:52:21	-- scripts/common.sh@354 -- # echo 1
00:21:32.592    00:52:21	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:32.592     00:52:21	-- scripts/common.sh@365 -- # decimal 2
00:21:32.592     00:52:21	-- scripts/common.sh@352 -- # local d=2
00:21:32.592     00:52:21	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:32.592     00:52:21	-- scripts/common.sh@354 -- # echo 2
00:21:32.592    00:52:21	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:32.592    00:52:21	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:32.592    00:52:21	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:32.592    00:52:21	-- scripts/common.sh@367 -- # return 0
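The block above is scripts/common.sh deciding whether the installed lcov predates version 2 (lt 1.15 2): cmp_versions splits both versions on ., - and :, then compares the fields numerically left to right; returning 0 here selects the lcov 1.x-compatible LCOV_OPTS exported next. A condensed sketch of the same idea, assuming purely numeric fields (the real helper normalizes each field through decimal first):

    cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 v n
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left is newer: not '<'
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                              # equal: not strictly less
    }
    lt() { cmp_versions "$1" '<' "$2"; }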
00:21:32.592    00:52:21	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:32.592    00:52:21	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:32.592  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.592  		--rc genhtml_branch_coverage=1
00:21:32.592  		--rc genhtml_function_coverage=1
00:21:32.592  		--rc genhtml_legend=1
00:21:32.592  		--rc geninfo_all_blocks=1
00:21:32.592  		--rc geninfo_unexecuted_blocks=1
00:21:32.592  		
00:21:32.592  		'
00:21:32.592    00:52:21	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:32.592  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.592  		--rc genhtml_branch_coverage=1
00:21:32.592  		--rc genhtml_function_coverage=1
00:21:32.592  		--rc genhtml_legend=1
00:21:32.592  		--rc geninfo_all_blocks=1
00:21:32.592  		--rc geninfo_unexecuted_blocks=1
00:21:32.592  		
00:21:32.592  		'
00:21:32.592    00:52:21	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:32.592  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.592  		--rc genhtml_branch_coverage=1
00:21:32.592  		--rc genhtml_function_coverage=1
00:21:32.592  		--rc genhtml_legend=1
00:21:32.592  		--rc geninfo_all_blocks=1
00:21:32.592  		--rc geninfo_unexecuted_blocks=1
00:21:32.592  		
00:21:32.592  		'
00:21:32.592    00:52:21	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:32.592  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:32.592  		--rc genhtml_branch_coverage=1
00:21:32.592  		--rc genhtml_function_coverage=1
00:21:32.592  		--rc genhtml_legend=1
00:21:32.592  		--rc geninfo_all_blocks=1
00:21:32.592  		--rc geninfo_unexecuted_blocks=1
00:21:32.592  		
00:21:32.592  		'
00:21:32.592   00:52:21	-- scheduler/scheduler.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/isolate_cores.sh
00:21:32.592    00:52:21	-- scheduler/isolate_cores.sh@6 -- # xtrace_disable
00:21:32.592    00:52:21	-- common/autotest_common.sh@10 -- # set +x
00:21:32.850  Moving 1070502 (PF_SUPERPRIV,PF_RANDOMIZE) to / from N/A
00:21:32.850  Moving 1070502 (PF_SUPERPRIV,PF_RANDOMIZE) to /cpuset from N/A
00:21:32.850   00:52:21	-- scheduler/scheduler.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:21:34.225  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:21:34.225  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:21:34.225  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:21:34.225   00:52:23	-- scheduler/scheduler.sh@14 -- # run_test idle /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh
00:21:34.225   00:52:23	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:34.225   00:52:23	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:34.225   00:52:23	-- common/autotest_common.sh@10 -- # set +x
00:21:34.225  ************************************
00:21:34.225  START TEST idle
00:21:34.225  ************************************
00:21:34.225   00:52:23	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh
00:21:34.225  * Looking for test storage...
00:21:34.225  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:21:34.225    00:52:23	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:34.225     00:52:23	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:34.225     00:52:23	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:34.225    00:52:23	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:34.225    00:52:23	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:34.225    00:52:23	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:34.225    00:52:23	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:34.225    00:52:23	-- scripts/common.sh@335 -- # IFS=.-:
00:21:34.225    00:52:23	-- scripts/common.sh@335 -- # read -ra ver1
00:21:34.225    00:52:23	-- scripts/common.sh@336 -- # IFS=.-:
00:21:34.225    00:52:23	-- scripts/common.sh@336 -- # read -ra ver2
00:21:34.225    00:52:23	-- scripts/common.sh@337 -- # local 'op=<'
00:21:34.225    00:52:23	-- scripts/common.sh@339 -- # ver1_l=2
00:21:34.225    00:52:23	-- scripts/common.sh@340 -- # ver2_l=1
00:21:34.225    00:52:23	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:34.225    00:52:23	-- scripts/common.sh@343 -- # case "$op" in
00:21:34.225    00:52:23	-- scripts/common.sh@344 -- # : 1
00:21:34.225    00:52:23	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:34.225    00:52:23	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:34.225     00:52:23	-- scripts/common.sh@364 -- # decimal 1
00:21:34.225     00:52:23	-- scripts/common.sh@352 -- # local d=1
00:21:34.225     00:52:23	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:34.225     00:52:23	-- scripts/common.sh@354 -- # echo 1
00:21:34.225    00:52:23	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:34.225     00:52:23	-- scripts/common.sh@365 -- # decimal 2
00:21:34.226     00:52:23	-- scripts/common.sh@352 -- # local d=2
00:21:34.226     00:52:23	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:34.226     00:52:23	-- scripts/common.sh@354 -- # echo 2
00:21:34.226    00:52:23	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:34.226    00:52:23	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:34.226    00:52:23	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:34.226    00:52:23	-- scripts/common.sh@367 -- # return 0
00:21:34.226    00:52:23	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:34.226    00:52:23	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:34.226  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.226  		--rc genhtml_branch_coverage=1
00:21:34.226  		--rc genhtml_function_coverage=1
00:21:34.226  		--rc genhtml_legend=1
00:21:34.226  		--rc geninfo_all_blocks=1
00:21:34.226  		--rc geninfo_unexecuted_blocks=1
00:21:34.226  		
00:21:34.226  		'
00:21:34.226    00:52:23	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:34.226  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.226  		--rc genhtml_branch_coverage=1
00:21:34.226  		--rc genhtml_function_coverage=1
00:21:34.226  		--rc genhtml_legend=1
00:21:34.226  		--rc geninfo_all_blocks=1
00:21:34.226  		--rc geninfo_unexecuted_blocks=1
00:21:34.226  		
00:21:34.226  		'
00:21:34.226    00:52:23	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:34.226  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.226  		--rc genhtml_branch_coverage=1
00:21:34.226  		--rc genhtml_function_coverage=1
00:21:34.226  		--rc genhtml_legend=1
00:21:34.226  		--rc geninfo_all_blocks=1
00:21:34.226  		--rc geninfo_unexecuted_blocks=1
00:21:34.226  		
00:21:34.226  		'
00:21:34.226    00:52:23	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:34.226  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:34.226  		--rc genhtml_branch_coverage=1
00:21:34.226  		--rc genhtml_function_coverage=1
00:21:34.226  		--rc genhtml_legend=1
00:21:34.226  		--rc geninfo_all_blocks=1
00:21:34.226  		--rc geninfo_unexecuted_blocks=1
00:21:34.226  		
00:21:34.226  		'
00:21:34.226   00:52:23	-- scheduler/idle.sh@11 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh
00:21:34.226    00:52:23	-- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:21:34.226    00:52:23	-- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:21:34.226    00:52:23	-- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:21:34.226    00:52:23	-- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler
00:21:34.226    00:52:23	-- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:21:34.226    00:52:23	-- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh
00:21:34.226     00:52:23	-- scheduler/cgroups.sh@245 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:21:34.226      00:52:23	-- scheduler/cgroups.sh@246 -- # check_cgroup
00:21:34.226      00:52:23	-- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:21:34.226      00:52:23	-- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:21:34.226      00:52:23	-- scheduler/cgroups.sh@10 -- # echo 2
00:21:34.226     00:52:23	-- scheduler/cgroups.sh@246 -- # cgroup_version=2
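check_cgroup infers the cgroup version from the mounted hierarchy: a unified (v2) mount exposes /sys/fs/cgroup/cgroup.controllers, and the cpuset match above confirms the controller the scheduler tests need, hence cgroup_version=2. A sketch of that probe (the v1 branch is an assumption; this run never reaches it):

    check_cgroup() {
        if [[ -e /sys/fs/cgroup/cgroup.controllers ]] &&
           [[ $(< /sys/fs/cgroup/cgroup.controllers) == *cpuset* ]]; then
            echo 2; return                                   # unified cgroup v2
        fi
        echo 1                                               # assume legacy v1 otherwise
    }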
00:21:34.226   00:52:23	-- scheduler/idle.sh@13 -- # trap 'killprocess "$spdk_pid"' EXIT
00:21:34.226   00:52:23	-- scheduler/idle.sh@71 -- # idle
00:21:34.226   00:52:23	-- scheduler/idle.sh@36 -- # local reactor_framework
00:21:34.226   00:52:23	-- scheduler/idle.sh@37 -- # local reactors thread
00:21:34.226   00:52:23	-- scheduler/idle.sh@38 -- # local thread_cpumask
00:21:34.226   00:52:23	-- scheduler/idle.sh@39 -- # local threads
00:21:34.226   00:52:23	-- scheduler/idle.sh@41 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1
00:21:34.226   00:52:23	-- scheduler/common.sh@405 -- # [[ -e /proc//status ]]
00:21:34.226   00:52:23	-- scheduler/common.sh@409 -- # spdk_pid=1071679
00:21:34.226   00:52:23	-- scheduler/common.sh@408 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc
00:21:34.226   00:52:23	-- scheduler/common.sh@411 -- # waitforlisten 1071679
00:21:34.226   00:52:23	-- common/autotest_common.sh@829 -- # '[' -z 1071679 ']'
00:21:34.226   00:52:23	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:34.226   00:52:23	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:34.226   00:52:23	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:34.226  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:34.226   00:52:23	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:34.226   00:52:23	-- common/autotest_common.sh@10 -- # set +x
00:21:34.226  [2024-12-17 00:52:23.455384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:34.226  [2024-12-17 00:52:23.455443] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071679 ]
00:21:34.485  EAL: No free 2048 kB hugepages reported on node 1
00:21:34.485  [2024-12-17 00:52:23.557682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8
00:21:34.485  [2024-12-17 00:52:23.635581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:21:34.485  [2024-12-17 00:52:23.635853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:34.485  [2024-12-17 00:52:23.635953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:34.485  [2024-12-17 00:52:23.636034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:34.485  [2024-12-17 00:52:23.636060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37
00:21:34.485  [2024-12-17 00:52:23.636119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38
00:21:34.485  [2024-12-17 00:52:23.636190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40
00:21:34.485  [2024-12-17 00:52:23.636161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39
00:21:34.485  [2024-12-17 00:52:23.636194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:34.744   00:52:23	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:34.744   00:52:23	-- common/autotest_common.sh@862 -- # return 0
00:21:34.744   00:52:23	-- scheduler/common.sh@412 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic
00:21:35.312  POWER: Env isn't set yet!
00:21:35.312  POWER: Attempting to initialise ACPI cpufreq power management...
00:21:35.312  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:21:35.312  POWER: Cannot set governor of lcore 1 to userspace
00:21:35.312  POWER: Attempting to initialise PSTAT power management...
00:21:35.312  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 1 power management
00:21:35.312  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 2 power management
00:21:35.312  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 3 power management
00:21:35.312  POWER: Power management governor of lcore 4 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 4 power management
00:21:35.312  POWER: Power management governor of lcore 37 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 37 power management
00:21:35.312  POWER: Power management governor of lcore 38 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 38 power management
00:21:35.312  POWER: Power management governor of lcore 39 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 39 power management
00:21:35.312  POWER: Power management governor of lcore 40 has been set to 'performance' successfully
00:21:35.312  POWER: Initialized successfully for lcore 40 power management
00:21:35.312  [2024-12-17 00:52:24.540131] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:21:35.312  [2024-12-17 00:52:24.540158] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:21:35.312  [2024-12-17 00:52:24.540175] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
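The three set_opts notices pin the dynamic scheduler's thresholds: broadly, threads under 20% load are treated as idle and packed onto the main core, while the 80% and 95% figures are the per-core limits the scheduler consults before spreading work back out. A sketch of the load test alone, with an illustrative helper name that is not part of SPDK:

  # load_limit=20: a thread whose busy share is below 20% counts as idle
  is_idle_thread() { (( $1 < 20 )); }   # $1 = thread load in percent
  is_idle_thread 2 && echo idle         # the 2% app_thread below qualifies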
00:21:35.312   00:52:24	-- scheduler/common.sh@413 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:21:35.881  [2024-12-17 00:52:24.949696] 'OCF_Core' volume operations registered
00:21:35.881  [2024-12-17 00:52:24.951941] 'OCF_Cache' volume operations registered
00:21:35.881  [2024-12-17 00:52:24.954577] 'OCF Composite' volume operations registered
00:21:35.881  [2024-12-17 00:52:24.956807] 'SPDK_block_device' volume operations registered
00:21:35.881   00:52:25	-- scheduler/idle.sh@48 -- # get_thread_stats_current
00:21:35.881   00:52:25	-- scheduler/common.sh@418 -- # xtrace_disable
00:21:35.881   00:52:25	-- common/autotest_common.sh@10 -- # set +x
00:21:37.784   00:52:26	-- scheduler/idle.sh@50 -- # xtrace_disable
00:21:37.784   00:52:26	-- common/autotest_common.sh@10 -- # set +x
00:21:37.784  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:21:37.784  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:21:37.784  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:21:37.784  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:21:37.784  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:21:37.784  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:21:37.784  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:21:38.043  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:21:38.302  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:21:38.302  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
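Each cpumask above is simply the OR of one bit per selected core: 0x1e00000001e is cores 1-4 plus 37-40, 0x2 is core 1 alone, 0x2000000000 is core 37. Reproducing the group mask:

  mask=0
  for core in 1 2 3 4 37 38 39 40; do
      mask=$(( mask | (1 << core) ))    # set this core's bit
  done
  printf '0x%x\n' "$mask"               # prints 0x1e00000001e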
00:21:40.208  [load:  2%, idle: 327572010, busy:   8755984] app_thread is idle
00:21:40.208  [load:  0%, idle: 300933858, busy:    219334] nvmf_tgt_poll_group_0 is idle
00:21:40.208  [load:  0%, idle: 300884806, busy:    218068] nvmf_tgt_poll_group_1 is idle
00:21:40.208  [load:  0%, idle: 300700340, busy:    217774] nvmf_tgt_poll_group_2 is idle
00:21:40.208  [load:  0%, idle: 300828984, busy:    229586] nvmf_tgt_poll_group_3 is idle
00:21:40.208  [load:  0%, idle: 301634358, busy:    217766] nvmf_tgt_poll_group_4 is idle
00:21:40.208  [load:  0%, idle: 301280512, busy:    217776] nvmf_tgt_poll_group_5 is idle
00:21:40.208  [load:  0%, idle: 300724652, busy:    218116] nvmf_tgt_poll_group_6 is idle
00:21:40.208  [load:  0%, idle: 302117226, busy:    217670] nvmf_tgt_poll_group_7 is idle
00:21:40.208  [load:  0%, idle: 306587094, busy:    218772] iscsi_poll_group_1 is idle
00:21:40.208  [load:  0%, idle: 306356842, busy:    218234] iscsi_poll_group_2 is idle
00:21:40.208  [load:  0%, idle: 306157396, busy:    217986] iscsi_poll_group_3 is idle
00:21:40.208  [load:  0%, idle: 305832926, busy:    219006] iscsi_poll_group_4 is idle
00:21:40.208  [load:  0%, idle: 305677776, busy:    225550] iscsi_poll_group_37 is idle
00:21:40.208  [load:  0%, idle: 308174406, busy:    239626] iscsi_poll_group_38 is idle
00:21:40.208  [load:  0%, idle: 305718026, busy:    225212] iscsi_poll_group_39 is idle
00:21:40.208  [load:  0%, idle: 305972040, busy:    226080] iscsi_poll_group_40 is idle
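The load column follows from the two tick counters beside it, load = busy / (busy + idle); checking the app_thread sample above:

  busy=8755984 idle=327572010
  echo $(( busy * 100 / (busy + idle) ))   # 2, matching "load:  2%"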
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:21:40.208  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:21:40.467  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:21:40.467  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:21:40.467  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:21:40.467  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:21:40.467  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:21:40.467  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:21:42.372  [load:  2%, idle: 329890530, busy:   8811546] app_thread is idle
00:21:42.372  [load:  0%, idle: 303237468, busy:    217854] nvmf_tgt_poll_group_0 is idle
00:21:42.372  [load:  0%, idle: 303266746, busy:    237278] nvmf_tgt_poll_group_1 is idle
00:21:42.372  [load:  0%, idle: 303160540, busy:    217930] nvmf_tgt_poll_group_2 is idle
00:21:42.372  [load:  0%, idle: 302859344, busy:    217766] nvmf_tgt_poll_group_3 is idle
00:21:42.372  [load:  0%, idle: 303697354, busy:    217620] nvmf_tgt_poll_group_4 is idle
00:21:42.372  [load:  0%, idle: 303189504, busy:    217666] nvmf_tgt_poll_group_5 is idle
00:21:42.372  [load:  0%, idle: 303277834, busy:    217584] nvmf_tgt_poll_group_6 is idle
00:21:42.372  [load:  0%, idle: 304713242, busy:    217998] nvmf_tgt_poll_group_7 is idle
00:21:42.372  [load:  0%, idle: 308936860, busy:    219856] iscsi_poll_group_1 is idle
00:21:42.372  [load:  0%, idle: 308288606, busy:    218396] iscsi_poll_group_2 is idle
00:21:42.372  [load:  0%, idle: 308293406, busy:    218892] iscsi_poll_group_3 is idle
00:21:42.372  [load:  0%, idle: 308211946, busy:    219154] iscsi_poll_group_4 is idle
00:21:42.372  [load:  0%, idle: 307847218, busy:    242010] iscsi_poll_group_37 is idle
00:21:42.372  [load:  0%, idle: 309946928, busy:    225732] iscsi_poll_group_38 is idle
00:21:42.372  [load:  0%, idle: 307831296, busy:    224990] iscsi_poll_group_39 is idle
00:21:42.372  [load:  0%, idle: 307784592, busy:    226004] iscsi_poll_group_40 is idle
00:21:42.372  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:21:42.372  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:21:42.372  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:21:42.372  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:21:42.372  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:21:42.372  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:21:42.372  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:21:42.631  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:21:42.890  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:21:42.890  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:21:44.402  [load:  2%, idle: 332273214, busy:   8916986] app_thread is idle
00:21:44.402  [load:  0%, idle: 305014662, busy:    218154] nvmf_tgt_poll_group_0 is idle
00:21:44.402  [load:  0%, idle: 305826918, busy:    218440] nvmf_tgt_poll_group_1 is idle
00:21:44.402  [load:  0%, idle: 305062826, busy:    218004] nvmf_tgt_poll_group_2 is idle
00:21:44.402  [load:  0%, idle: 305014266, busy:    217246] nvmf_tgt_poll_group_3 is idle
00:21:44.402  [load:  0%, idle: 305372290, busy:    217746] nvmf_tgt_poll_group_4 is idle
00:21:44.402  [load:  0%, idle: 305635744, busy:    217746] nvmf_tgt_poll_group_5 is idle
00:21:44.402  [load:  0%, idle: 305077286, busy:    217588] nvmf_tgt_poll_group_6 is idle
00:21:44.402  [load:  0%, idle: 306303140, busy:    237464] nvmf_tgt_poll_group_7 is idle
00:21:44.402  [load:  0%, idle: 311647440, busy:    219740] iscsi_poll_group_1 is idle
00:21:44.402  [load:  0%, idle: 310413406, busy:    218514] iscsi_poll_group_2 is idle
00:21:44.402  [load:  0%, idle: 310432940, busy:    219414] iscsi_poll_group_3 is idle
00:21:44.402  [load:  0%, idle: 310202334, busy:    219202] iscsi_poll_group_4 is idle
00:21:44.402  [load:  0%, idle: 309880650, busy:    225486] iscsi_poll_group_37 is idle
00:21:44.402  [load:  0%, idle: 311792328, busy:    225712] iscsi_poll_group_38 is idle
00:21:44.402  [load:  0%, idle: 310208488, busy:    225720] iscsi_poll_group_39 is idle
00:21:44.402  [load:  0%, idle: 310001440, busy:    225804] iscsi_poll_group_40 is idle
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:21:44.661  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:21:44.920  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:21:45.179  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:21:47.081  [load:  2%, idle: 326816930, busy:   8801386] app_thread is idle
00:21:47.081  [load:  0%, idle: 299956216, busy:    218370] nvmf_tgt_poll_group_0 is idle
00:21:47.081  [load:  0%, idle: 300059166, busy:    218010] nvmf_tgt_poll_group_1 is idle
00:21:47.081  [load:  0%, idle: 299663506, busy:    217846] nvmf_tgt_poll_group_2 is idle
00:21:47.081  [load:  0%, idle: 300228176, busy:    217276] nvmf_tgt_poll_group_3 is idle
00:21:47.081  [load:  0%, idle: 300003858, busy:    218522] nvmf_tgt_poll_group_4 is idle
00:21:47.081  [load:  0%, idle: 300193398, busy:    217762] nvmf_tgt_poll_group_5 is idle
00:21:47.081  [load:  0%, idle: 299970434, busy:    217618] nvmf_tgt_poll_group_6 is idle
00:21:47.081  [load:  0%, idle: 301236306, busy:    233334] nvmf_tgt_poll_group_7 is idle
00:21:47.081  [load:  0%, idle: 305941812, busy:    223810] iscsi_poll_group_1 is idle
00:21:47.081  [load:  0%, idle: 305224146, busy:    223332] iscsi_poll_group_2 is idle
00:21:47.081  [load:  0%, idle: 305412928, busy:    222970] iscsi_poll_group_3 is idle
00:21:47.081  [load:  0%, idle: 305024864, busy:    224862] iscsi_poll_group_4 is idle
00:21:47.081  [load:  0%, idle: 304714022, busy:    230504] iscsi_poll_group_37 is idle
00:21:47.081  [load:  0%, idle: 307216154, busy:    230360] iscsi_poll_group_38 is idle
00:21:47.081  [load:  0%, idle: 304637050, busy:    230072] iscsi_poll_group_39 is idle
00:21:47.081  [load:  0%, idle: 305038400, busy:    230820] iscsi_poll_group_40 is idle
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:21:47.081  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:21:47.339  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:21:47.339  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:21:47.339  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:21:47.339  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:21:49.246  [load:  2%, idle: 328558638, busy:   8733238] app_thread is idle
00:21:49.246  [load:  0%, idle: 302043246, busy:    217374] nvmf_tgt_poll_group_0 is idle
00:21:49.246  [load:  0%, idle: 301894880, busy:    217552] nvmf_tgt_poll_group_1 is idle
00:21:49.246  [load:  0%, idle: 301718756, busy:    217278] nvmf_tgt_poll_group_2 is idle
00:21:49.246  [load:  0%, idle: 301964566, busy:    216810] nvmf_tgt_poll_group_3 is idle
00:21:49.246  [load:  0%, idle: 302127356, busy:    237146] nvmf_tgt_poll_group_4 is idle
00:21:49.246  [load:  0%, idle: 302072462, busy:    217942] nvmf_tgt_poll_group_5 is idle
00:21:49.246  [load:  0%, idle: 301953424, busy:    217812] nvmf_tgt_poll_group_6 is idle
00:21:49.246  [load:  0%, idle: 303307096, busy:    217684] nvmf_tgt_poll_group_7 is idle
00:21:49.246  [load:  0%, idle: 307831060, busy:    219202] iscsi_poll_group_1 is idle
00:21:49.246  [load:  0%, idle: 307475394, busy:    218440] iscsi_poll_group_2 is idle
00:21:49.246  [load:  0%, idle: 307442702, busy:    218350] iscsi_poll_group_3 is idle
00:21:49.246  [load:  0%, idle: 307070828, busy:    232028] iscsi_poll_group_4 is idle
00:21:49.246  [load:  0%, idle: 307128918, busy:    225960] iscsi_poll_group_37 is idle
00:21:49.246  [load:  0%, idle: 308774784, busy:    225700] iscsi_poll_group_38 is idle
00:21:49.246  [load:  0%, idle: 306958352, busy:    243984] iscsi_poll_group_39 is idle
00:21:49.246  [load:  0%, idle: 306867358, busy:    226754] iscsi_poll_group_40 is idle
00:21:49.246   00:52:38	-- scheduler/idle.sh@1 -- # killprocess 1071679
00:21:49.246   00:52:38	-- common/autotest_common.sh@936 -- # '[' -z 1071679 ']'
00:21:49.246   00:52:38	-- common/autotest_common.sh@940 -- # kill -0 1071679
00:21:49.246    00:52:38	-- common/autotest_common.sh@941 -- # uname
00:21:49.246   00:52:38	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:49.246    00:52:38	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1071679
00:21:49.246   00:52:38	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:49.246   00:52:38	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:49.246   00:52:38	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1071679'
00:21:49.246  killing process with pid 1071679
00:21:49.246   00:52:38	-- common/autotest_common.sh@955 -- # kill 1071679
00:21:49.246   00:52:38	-- common/autotest_common.sh@960 -- # wait 1071679
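The killprocess steps above are the usual safe-kill pattern: confirm the pid is alive, confirm it is not a sudo wrapper, signal it, then reap it. A minimal standalone equivalent (only valid when $pid is a child of the current shell, as spdk_tgt is here):

  pid=1071679
  if kill -0 "$pid" 2>/dev/null; then        # process still alive?
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name != sudo ]] && kill "$pid"     # never signal the sudo wrapper
      wait "$pid" 2>/dev/null || true        # reap; ignore its exit code
  fi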
00:21:49.246  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:21:49.246  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:21:49.246  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:21:49.246  POWER: Power management governor of lcore 4 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original
00:21:49.246  POWER: Power management governor of lcore 37 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original
00:21:49.246  POWER: Power management governor of lcore 38 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original
00:21:49.246  POWER: Power management governor of lcore 39 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original
00:21:49.246  POWER: Power management governor of lcore 40 has been set to 'powersave' successfully
00:21:49.246  POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original
00:21:49.505  
00:21:49.505  real	0m15.374s
00:21:49.505  user	0m33.628s
00:21:49.505  sys	0m1.676s
00:21:49.505   00:52:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:49.505   00:52:38	-- common/autotest_common.sh@10 -- # set +x
00:21:49.505  ************************************
00:21:49.505  END TEST idle
00:21:49.505  ************************************
00:21:49.505   00:52:38	-- scheduler/scheduler.sh@16 -- # run_test dpdk_governor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh
00:21:49.505   00:52:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:49.505   00:52:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:49.505   00:52:38	-- common/autotest_common.sh@10 -- # set +x
00:21:49.505  ************************************
00:21:49.505  START TEST dpdk_governor
00:21:49.505  ************************************
00:21:49.505   00:52:38	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh
00:21:49.505  * Looking for test storage...
00:21:49.505  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:21:49.505    00:52:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:49.505     00:52:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:49.505     00:52:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:49.768    00:52:38	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:49.768    00:52:38	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:49.768    00:52:38	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:49.768    00:52:38	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:49.768    00:52:38	-- scripts/common.sh@335 -- # IFS=.-:
00:21:49.768    00:52:38	-- scripts/common.sh@335 -- # read -ra ver1
00:21:49.768    00:52:38	-- scripts/common.sh@336 -- # IFS=.-:
00:21:49.768    00:52:38	-- scripts/common.sh@336 -- # read -ra ver2
00:21:49.768    00:52:38	-- scripts/common.sh@337 -- # local 'op=<'
00:21:49.768    00:52:38	-- scripts/common.sh@339 -- # ver1_l=2
00:21:49.768    00:52:38	-- scripts/common.sh@340 -- # ver2_l=1
00:21:49.768    00:52:38	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:49.768    00:52:38	-- scripts/common.sh@343 -- # case "$op" in
00:21:49.768    00:52:38	-- scripts/common.sh@344 -- # : 1
00:21:49.768    00:52:38	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:49.768    00:52:38	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:49.768     00:52:38	-- scripts/common.sh@364 -- # decimal 1
00:21:49.768     00:52:38	-- scripts/common.sh@352 -- # local d=1
00:21:49.768     00:52:38	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:49.768     00:52:38	-- scripts/common.sh@354 -- # echo 1
00:21:49.768    00:52:38	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:49.768     00:52:38	-- scripts/common.sh@365 -- # decimal 2
00:21:49.768     00:52:38	-- scripts/common.sh@352 -- # local d=2
00:21:49.768     00:52:38	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:49.768     00:52:38	-- scripts/common.sh@354 -- # echo 2
00:21:49.768    00:52:38	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:49.768    00:52:38	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:49.768    00:52:38	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:49.768    00:52:38	-- scripts/common.sh@367 -- # return 0
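The cmp_versions trace above splits both versions on '.', '-' and ':' and compares the fields numerically, left to right, padding the shorter list with zeros. Condensed into a single function it is roughly:

  version_lt() {                         # true when $1 < $2
      local IFS=.-:
      local -a a=($1) b=($2)
      local v
      for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1                           # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov predates 2"   # true for the 1.15 seen here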
00:21:49.768    00:52:38	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:49.768    00:52:38	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:49.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:49.768  		--rc genhtml_branch_coverage=1
00:21:49.768  		--rc genhtml_function_coverage=1
00:21:49.768  		--rc genhtml_legend=1
00:21:49.768  		--rc geninfo_all_blocks=1
00:21:49.768  		--rc geninfo_unexecuted_blocks=1
00:21:49.768  		
00:21:49.768  		'
00:21:49.768    00:52:38	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:49.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:49.768  		--rc genhtml_branch_coverage=1
00:21:49.768  		--rc genhtml_function_coverage=1
00:21:49.768  		--rc genhtml_legend=1
00:21:49.768  		--rc geninfo_all_blocks=1
00:21:49.768  		--rc geninfo_unexecuted_blocks=1
00:21:49.768  		
00:21:49.768  		'
00:21:49.768    00:52:38	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:49.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:49.768  		--rc genhtml_branch_coverage=1
00:21:49.768  		--rc genhtml_function_coverage=1
00:21:49.768  		--rc genhtml_legend=1
00:21:49.768  		--rc geninfo_all_blocks=1
00:21:49.768  		--rc geninfo_unexecuted_blocks=1
00:21:49.768  		
00:21:49.768  		'
00:21:49.768    00:52:38	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:49.768  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:49.768  		--rc genhtml_branch_coverage=1
00:21:49.768  		--rc genhtml_function_coverage=1
00:21:49.768  		--rc genhtml_legend=1
00:21:49.768  		--rc geninfo_all_blocks=1
00:21:49.768  		--rc geninfo_unexecuted_blocks=1
00:21:49.768  		
00:21:49.768  		'
00:21:49.768   00:52:38	-- scheduler/governor.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh
00:21:49.768    00:52:38	-- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:21:49.768    00:52:38	-- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:21:49.768    00:52:38	-- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:21:49.768    00:52:38	-- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler
00:21:49.768    00:52:38	-- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:21:49.768    00:52:38	-- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh
00:21:49.768     00:52:38	-- scheduler/cgroups.sh@245 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:21:49.768      00:52:38	-- scheduler/cgroups.sh@246 -- # check_cgroup
00:21:49.768      00:52:38	-- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:21:49.768      00:52:38	-- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:21:49.768      00:52:38	-- scheduler/cgroups.sh@10 -- # echo 2
00:21:49.768     00:52:38	-- scheduler/cgroups.sh@246 -- # cgroup_version=2
00:21:49.768   00:52:38	-- scheduler/governor.sh@12 -- # trap 'killprocess "$spdk_pid" || :; restore_cpufreq' EXIT
00:21:49.768   00:52:38	-- scheduler/governor.sh@157 -- # map_cpufreq
00:21:49.768   00:52:38	-- scheduler/common.sh@243 -- # cpufreq_drivers=()
00:21:49.768   00:52:38	-- scheduler/common.sh@243 -- # local -g cpufreq_drivers
00:21:49.768   00:52:38	-- scheduler/common.sh@244 -- # cpufreq_governors=()
00:21:49.768   00:52:38	-- scheduler/common.sh@244 -- # local -g cpufreq_governors
00:21:49.768   00:52:38	-- scheduler/common.sh@245 -- # cpufreq_base_freqs=()
00:21:49.768   00:52:38	-- scheduler/common.sh@245 -- # local -g cpufreq_base_freqs
00:21:49.768   00:52:38	-- scheduler/common.sh@246 -- # cpufreq_max_freqs=()
00:21:49.768   00:52:38	-- scheduler/common.sh@246 -- # local -g cpufreq_max_freqs
00:21:49.768   00:52:38	-- scheduler/common.sh@247 -- # cpufreq_min_freqs=()
00:21:49.768   00:52:38	-- scheduler/common.sh@247 -- # local -g cpufreq_min_freqs
00:21:49.768   00:52:38	-- scheduler/common.sh@248 -- # cpufreq_cur_freqs=()
00:21:49.768   00:52:38	-- scheduler/common.sh@248 -- # local -g cpufreq_cur_freqs
00:21:49.768   00:52:38	-- scheduler/common.sh@249 -- # cpufreq_is_turbo=()
00:21:49.768   00:52:38	-- scheduler/common.sh@249 -- # local -g cpufreq_is_turbo
00:21:49.768   00:52:38	-- scheduler/common.sh@250 -- # cpufreq_available_freqs=()
00:21:49.768   00:52:38	-- scheduler/common.sh@250 -- # local -g cpufreq_available_freqs
00:21:49.768   00:52:38	-- scheduler/common.sh@251 -- # cpufreq_available_governors=()
00:21:49.768   00:52:38	-- scheduler/common.sh@251 -- # local -g cpufreq_available_governors
00:21:49.768   00:52:38	-- scheduler/common.sh@252 -- # cpufreq_high_prio=()
00:21:49.768   00:52:38	-- scheduler/common.sh@252 -- # local -g cpufreq_high_prio
00:21:49.768   00:52:38	-- scheduler/common.sh@253 -- # cpufreq_non_turbo_ratio=()
00:21:49.769   00:52:38	-- scheduler/common.sh@253 -- # local -g cpufreq_non_turbo_ratio
00:21:49.769   00:52:38	-- scheduler/common.sh@254 -- # cpufreq_setspeed=()
00:21:49.769   00:52:38	-- scheduler/common.sh@254 -- # local -g cpufreq_setspeed
00:21:49.769   00:52:38	-- scheduler/common.sh@255 -- # cpuinfo_max_freqs=()
00:21:49.769   00:52:38	-- scheduler/common.sh@255 -- # local -g cpuinfo_max_freqs
00:21:49.769   00:52:38	-- scheduler/common.sh@256 -- # cpuinfo_min_freqs=()
00:21:49.769   00:52:38	-- scheduler/common.sh@256 -- # local -g cpuinfo_min_freqs
00:21:49.769   00:52:38	-- scheduler/common.sh@257 -- # local -g turbo_enabled=0
00:21:49.769   00:52:38	-- scheduler/common.sh@258 -- # local cpu cpu_idx
00:21:49.769   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.769   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=0
00:21:49.769   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq ]]
00:21:49.769   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.769   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.769   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq/base_frequency ]]
00:21:49.769   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.769   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=999872
00:21:49.769   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.769   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.769   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_0
00:21:49.769   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_0[@]'
00:21:49.769   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.769   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_0
00:21:49.769   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_0[@]'
00:21:49.769   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.769   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.769    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 0 0xce
00:21:49.769   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.769   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.769   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.769   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.769   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.769   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.769   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.769   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.769   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.769   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.769   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
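The non_turbo_ratio read from MSR 0xCE carries the maximum non-turbo ratio in bits 15:8; decoding it against the 100 MHz bus clock reproduces both logged figures:

  msr=0x70a2cf3811700
  ratio=$(( (msr >> 8) & 0xff ))   # bits 15:8 -> 0x17
  echo "$ratio"                    # 23, the cpufreq_non_turbo_ratio above
  echo $(( ratio * 100000 ))       # 2300000 kHz, the 2.3 GHz base frequency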
00:21:49.769   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.769   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=1
00:21:49.769   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq ]]
00:21:49.769   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.769   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.769   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq/base_frequency ]]
00:21:49.769   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.769   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.769   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=1000000
00:21:49.769   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.769   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_1
00:21:49.769   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_1[@]'
00:21:49.769   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.769   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_1
00:21:49.769   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_1[@]'
00:21:49.769   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.769   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.769    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 1 0xce
00:21:49.769   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.769   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.769   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.769   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.769   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.769   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.769   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.769   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.769   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.769   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.769   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.769   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.769   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.769   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
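Every per-CPU pass above materializes the same table: a turbo sentinel one kHz above the base frequency, then 100 MHz steps from the base down to the minimum, 15 entries in all (num_freqs=14 plus the sentinel). Generated on its own:

  base=2300000 min=1000000
  freqs=( $(( base + 1 )) )                 # 2300001 marks the turbo bin
  for (( f = base; f >= min; f -= 100000 )); do
      freqs+=( "$f" )
  done
  echo "${#freqs[@]}: ${freqs[*]}"          # 15: 2300001 2300000 ... 1000000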
00:21:49.770   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.770   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=10
00:21:49.770   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq ]]
00:21:49.770   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.770   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.770   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq/base_frequency ]]
00:21:49.770   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.770   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.770   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.770   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.770   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_10
00:21:49.770   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_10[@]'
00:21:49.770   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.770   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_10
00:21:49.770   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_10[@]'
00:21:49.770   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.770   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.770    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 10 0xce
00:21:49.770   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.770   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.770   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.770   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.770   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.770   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.770   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.770   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.770   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.770   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.770   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.770   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.770   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.770   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.771   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=11
00:21:49.771   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq ]]
00:21:49.771   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.771   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.771   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq/base_frequency ]]
00:21:49.771   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.771   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.771   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_11
00:21:49.771   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_11[@]'
00:21:49.771   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.771   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_11
00:21:49.771   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_11[@]'
00:21:49.771   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.771   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.771    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 11 0xce
00:21:49.771   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.771   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.771   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.771   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.771   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.771   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.771   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.771   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.771   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.771   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
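The @295-@298 lines in each block decode MSR 0xCE (MSR_PLATFORM_INFO), which rdmsr.pl reads per core: bits 15:8 carry the maximum non-turbo ratio, and that ratio times 100 MHz gives the base frequency in kHz. For the raw value 0x70a2cf3811700 seen above, the arithmetic works out as follows (a worked sketch of the decode, not the script's exact code):

    msr=0x70a2cf3811700
    non_turbo_ratio=$(( (msr >> 8) & 0xff ))        # bits 15:8 -> 0x17 = 23
    base_max_freq=$(( non_turbo_ratio * 100000 ))   # 23 * 100 MHz, in kHz -> 2300000
    echo "$non_turbo_ratio $base_max_freq"          # matches the logged 23 / 2300000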
00:21:49.771   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.771   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=12
00:21:49.771   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq ]]
00:21:49.771   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.771   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.771   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq/base_frequency ]]
00:21:49.771   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.771   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.771   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_12
00:21:49.771   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_12[@]'
00:21:49.771   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.771   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_12
00:21:49.771   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_12[@]'
00:21:49.771   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.771   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.771    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 12 0xce
00:21:49.771   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.771   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.771   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.771   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.771   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.771   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.771   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.771   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.771   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.771   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.771   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.771   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.771   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.771   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
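The @299-@303 check compares the core's own base frequency against the package non-turbo ratio, which appears to be how the script singles out higher-clocked (priority) cores on SKUs that have them. On this host every core has base_frequency 2300000 and ratio 23, so the comparison is a one-liner that always comes out false (values taken from the log):

    base=2300000 ratio=23
    (( base / 100000 > ratio )) && high_prio=1 || high_prio=0   # 23 > 23 is false -> 0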
00:21:49.772   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.772   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=13
00:21:49.772   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq ]]
00:21:49.772   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.772   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.772   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq/base_frequency ]]
00:21:49.772   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.772   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.772   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.772   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.772   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_13
00:21:49.772   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_13[@]'
00:21:49.772   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.772   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_13
00:21:49.772   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_13[@]'
00:21:49.772   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.772   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.772    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 13 0xce
00:21:49.772   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.772   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.772   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.772   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.772   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.772   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.772   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.772   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.772   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.772   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.772   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.772   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.772   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.772   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
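The tail of every block (@304-@318) builds the per-CPU frequency table. Because base_max_freq (2300000) is below cpuinfo_max_freq (3700000), turbo is treated as available: num_freqs grows from 14 to 15, slot 0 gets base plus 1 kHz (the 2300001 sentinel the scheduler tests later use to request turbo), and the remaining slots descend from the base in 100 MHz steps down to the 1000000 kHz floor. An equivalent sketch of that loop, with the starting values lifted from the log:

    num_freqs=14 base_max_freq=2300000 is_turbo=1
    (( is_turbo )) && (( num_freqs += 1 ))          # one extra slot for the turbo entry
    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$(( base_max_freq + 1 ))                       # 2300001
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    echo "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000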
00:21:49.773   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.773   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=14
00:21:49.773   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq ]]
00:21:49.773   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.773   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.773   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq/base_frequency ]]
00:21:49.773   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.773   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.773   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.773   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.773   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_14
00:21:49.773   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_14[@]'
00:21:49.773   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.773   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_14
00:21:49.773   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_14[@]'
00:21:49.773   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.773   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.773    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 14 0xce
00:21:49.773   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.773   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.773   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.773   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.773   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.773   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.773   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.773   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.773   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.773   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.773   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.773   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.773   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.773   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
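The @275-@280 lines explain the bookkeeping: available_governors and available_freqs are bash namerefs (local -n) onto per-CPU arrays such as available_freqs_cpu_14, while the strings like 'available_freqs_cpu_14[@]' stored in cpufreq_available_freqs let later code expand any CPU's list indirectly, presumably without needing a nameref in scope. A small readback sketch of that indirection (cpu_idx chosen arbitrarily):

    cpu_idx=14
    ref="available_freqs_cpu_${cpu_idx}[@]"   # the string stored per CPU above
    for f in "${!ref}"; do echo "$f"; done    # ${!ref} expands the named array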
00:21:49.773   00:52:38	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.773   00:52:38	-- scheduler/common.sh@261 -- # cpu_idx=15
00:21:49.773   00:52:38	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq ]]
00:21:49.773   00:52:38	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.773   00:52:38	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.773   00:52:38	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq/base_frequency ]]
00:21:49.773   00:52:38	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.773   00:52:38	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.773   00:52:38	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.773   00:52:38	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.773   00:52:38	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_15
00:21:49.773   00:52:38	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_15[@]'
00:21:49.773   00:52:38	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.773   00:52:38	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_15
00:21:49.773   00:52:38	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_15[@]'
00:21:49.773   00:52:38	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.773   00:52:38	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.774    00:52:38	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 15 0xce
00:21:49.774   00:52:38	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.774   00:52:38	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.774   00:52:38	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.774   00:52:38	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.774   00:52:38	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.774   00:52:38	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.774   00:52:38	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.774   00:52:38	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.774   00:52:38	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.774   00:52:38	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.774   00:52:38	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:38	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:38	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:38	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.774   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=16
00:21:49.774   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq ]]
00:21:49.774   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.774   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.774   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq/base_frequency ]]
00:21:49.774   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.774   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:49.774   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.774   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.774   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_16
00:21:49.774   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_16[@]'
00:21:49.774   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.774   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_16
00:21:49.774   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_16[@]'
00:21:49.774   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.774   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.774    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 16 0xce
00:21:49.774   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:49.774   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:49.774   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:49.774   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:49.774   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:49.774   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:49.774   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:49.774   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:49.774   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:49.774   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:49.774   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.774   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.774   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:49.774   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.775   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.775   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.775   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.775   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.775   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.775   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.775   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.775   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.775   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:49.775   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:49.775   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:49.775   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:49.775   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=17
00:21:49.775   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq ]]
00:21:49.775   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:49.775   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:49.775   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq/base_frequency ]]
00:21:49.775   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:49.775   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000084
00:21:49.775   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:49.775   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:49.775   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_17
00:21:49.775   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_17[@]'
00:21:49.775   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:49.775   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_17
00:21:49.775   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_17[@]'
00:21:49.775   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:49.775   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:49.775    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 17 0xce
00:21:50.039   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.039   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.039   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.039   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.039   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.039   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.039   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.039   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.039   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.039   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.039   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.039   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=18
00:21:50.039   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq ]]
00:21:50.039   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.039   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.039   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq/base_frequency ]]
00:21:50.039   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.039   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.039   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.039   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.039   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_18
00:21:50.039   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_18[@]'
00:21:50.039   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.039   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_18
00:21:50.039   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_18[@]'
00:21:50.039   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.039   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.039    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 18 0xce
00:21:50.039   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.039   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.039   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.039   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.039   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.039   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.039   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.039   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.039   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.039   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.039   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.039   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.039   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.039   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
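For intel_pstate the script cross-checks sysfs against MSR 0xCE (MSR_PLATFORM_INFO) via test/scheduler/rdmsr.pl; bits 15:8 of that register hold the maximum non-turbo ratio in units of 100 MHz. Decoding the raw value logged at @295 shows where cpufreq_non_turbo_ratio=23 comes from (a hedged reconstruction of the @295-@298 steps):

    non_turbo_ratio=0x70a2cf3811700
    echo $(( (non_turbo_ratio >> 8) & 0xff ))   # 23 -> 23 x 100 MHz = 2.3 GHz base

Since 23 x 100000 kHz equals the 2300000 kHz base_frequency read from sysfs, the @299 comparison (base-frequency ratio > non-turbo ratio) is false and the core is not flagged high-priority at @303.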
00:21:50.040   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.040   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=19
00:21:50.040   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq ]]
00:21:50.040   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.040   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.040   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq/base_frequency ]]
00:21:50.040   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.040   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.040   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:50.040   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.040   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_19
00:21:50.040   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_19[@]'
00:21:50.040   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.040   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_19
00:21:50.040   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_19[@]'
00:21:50.040   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.040   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.040    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 19 0xce
00:21:50.040   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.040   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.040   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.040   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.040   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.040   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.040   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.040   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.040   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.040   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.040   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.040   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.040   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.040   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
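Each core also gets its own governor and frequency arrays through bash namerefs: the @275/@279 steps bind the generic names available_governors and available_freqs to per-CPU arrays such as available_freqs_cpu_19, while the string 'available_freqs_cpu_19[@]' stored at @280 lets later code expand that array indirectly. A self-contained sketch of the pattern, with illustrative names only:

    declare -a freqs_cpu_19=(2300001 2300000 1000000)
    declare -n freqs_ref=freqs_cpu_19     # nameref: reads and writes pass through
    lookup='freqs_cpu_19[@]'
    echo "${freqs_ref[0]}"                # 2300001, via the nameref
    echo "${!lookup}"                     # whole list, via indirect expansion

The trace uses local -n because it runs inside a function; declare -n is the equivalent at top level.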
00:21:50.040   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.040   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=2
00:21:50.040   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq ]]
00:21:50.040   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.040   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.040   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq/base_frequency ]]
00:21:50.041   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_2
00:21:50.041   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_2[@]'
00:21:50.041   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.041   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_2
00:21:50.041   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_2[@]'
00:21:50.041   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.041   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.041    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 2 0xce
00:21:50.041   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.041   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.041   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.041   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.041   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.041   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.041   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.041   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.041   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.041   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
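One detail worth noting in the cpu2 block above: scaling_min_freq, scaling_max_freq, and scaling_cur_freq all read 2300000, while sibling cores report a 1000000-3700000 (or 2300001) range. The discovery pass only records what sysfs reports, so cpu2 appears to have been left pinned at its base frequency, presumably by an earlier test step or governor setting; nothing in this pass normalizes it. A quick check for such a pinned core might look like:

    cpu=/sys/devices/system/cpu/cpu2
    [[ $(< "$cpu/cpufreq/scaling_min_freq") == $(< "$cpu/cpufreq/scaling_max_freq") ]] &&
            echo "cpu2 pinned at $(< "$cpu/cpufreq/scaling_cur_freq") kHz"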
00:21:50.041   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.041   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=20
00:21:50.041   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq ]]
00:21:50.041   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.041   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.041   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq/base_frequency ]]
00:21:50.041   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1060748
00:21:50.041   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:50.041   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.041   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_20
00:21:50.041   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_20[@]'
00:21:50.041   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.041   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_20
00:21:50.041   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_20[@]'
00:21:50.041   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.041   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.041    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 20 0xce
00:21:50.041   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.041   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.041   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.041   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.041   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.041   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.041   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.041   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.041   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.041   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.041   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.041   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.041   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
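The @304-@318 steps then synthesize the frequency table itself: intel_pstate exposes no scaling_available_frequencies, so the script derives one. With base_max_freq (2300000) below cpuinfo_max_freqs (3700000) the core is turbo-capable, num_freqs grows from 14 to 15 at @308, and slot 0 receives base+1 kHz (2300001) at @316 as the conventional "turbo enabled" request value; the remaining slots step down in 100000 kHz increments to 1000000. A reconstructed loop (note that the @293 local declares num_freq without the trailing s while the code assigns num_freqs, so in the traced script that counter apparently escapes the local scope; the sketch below simply uses one consistent name):

    base_max_freq=2300000 num_freqs=14 is_turbo=1
    (( is_turbo )) && (( num_freqs += 1 ))
    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
            if (( freq == 0 && is_turbo == 1 )); then
                    available_freqs[freq]=$(( base_max_freq + 1 ))
            else
                    available_freqs[freq]=$(( base_max_freq - 100000 * (freq - is_turbo) ))
            fi
    done
    echo "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000

This reproduces exactly the fifteen assignments traced for each turbo-capable core in this section.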
00:21:50.042   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.042   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=21
00:21:50.042   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq ]]
00:21:50.042   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.042   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.042   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq/base_frequency ]]
00:21:50.042   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.042   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.042   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:50.042   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.042   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_21
00:21:50.042   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_21[@]'
00:21:50.042   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.042   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_21
00:21:50.042   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_21[@]'
00:21:50.042   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.042   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.042    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 21 0xce
00:21:50.042   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.042   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.042   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.042   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.042   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.042   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.042   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.042   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.042   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.042   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.042   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.042   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.042   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.042   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.043   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=22
00:21:50.043   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq ]]
00:21:50.043   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.043   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.043   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq/base_frequency ]]
00:21:50.043   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.043   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.043   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:50.043   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.043   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_22
00:21:50.043   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_22[@]'
00:21:50.043   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.043   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_22
00:21:50.043   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_22[@]'
00:21:50.043   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.043   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.043    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 22 0xce
00:21:50.043   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.043   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.043   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.043   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.043   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.043   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.043   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.043   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.043   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.043   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.043   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.043   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.043   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.043   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.043   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=23
00:21:50.043   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq ]]
00:21:50.043   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.043   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.043   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq/base_frequency ]]
00:21:50.043   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.043   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.043   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.043   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.043   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_23
00:21:50.043   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_23[@]'
00:21:50.043   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.043   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_23
00:21:50.043   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_23[@]'
00:21:50.043   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.043   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.043    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 23 0xce
00:21:50.044   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.044   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.044   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.044   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.044   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.044   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.044   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.044   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.044   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.044   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.044   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.044   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=24
00:21:50.044   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq ]]
00:21:50.044   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.044   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.044   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq/base_frequency ]]
00:21:50.044   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.044   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.044   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.044   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.044   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_24
00:21:50.044   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_24[@]'
00:21:50.044   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.044   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_24
00:21:50.044   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_24[@]'
00:21:50.044   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.044   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.044    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 24 0xce
00:21:50.044   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.044   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.044   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.044   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.044   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.044   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.044   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.044   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.044   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.044   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.044   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.044   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.044   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.044   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
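(The rdmsr.pl call at common.sh@295 reads MSR 0xCE, which on Intel parts is MSR_PLATFORM_INFO; bits 15:8 carry the maximum non-turbo ratio in 100 MHz units, which matches the numbers logged above. Decoding the raw value by hand:)

    raw=0x70a2cf3811700          # non_turbo_ratio as logged
    ratio=$(( (raw >> 8) & 0xff ))
    echo "$ratio"                # 23 (0x17), the cpufreq_non_turbo_ratio above
    echo $(( ratio * 100000 ))   # 2300000 kHz, the base_max_freq above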
00:21:50.045   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.045   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=25
00:21:50.045   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq ]]
00:21:50.045   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.045   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.045   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq/base_frequency ]]
00:21:50.045   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.045   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.045   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.045   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.045   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_25
00:21:50.045   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_25[@]'
00:21:50.045   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.045   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_25
00:21:50.045   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_25[@]'
00:21:50.045   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.045   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.045    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 25 0xce
00:21:50.045   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.045   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.045   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.045   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.045   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.045   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.045   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.045   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.045   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.045   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.045   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.045   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.045   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.045   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
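(The `local -n` lines at common.sh@275-@280 use bash namerefs so each CPU gets its own governor/frequency arrays, while the flat cpufreq_* arrays store a "name[@]" string that points at them. A minimal sketch of the pattern; how the stored string is consumed later is an assumption, not something this trace shows:)

    cpu_idx=25
    name="available_freqs_cpu_${cpu_idx}"
    declare -n available_freqs="$name"        # the trace uses `local -n` inside a function
    available_freqs=(2300001 2300000 2200000) # writes through to available_freqs_cpu_25
    cpufreq_available_freqs[cpu_idx]="${name}[@]"
    ref=${cpufreq_available_freqs[cpu_idx]}
    printf '%s\n' "${!ref}"                   # indirect expansion: 2300001 2300000 2200000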
00:21:50.045   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.045   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=26
00:21:50.045   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu26/cpufreq ]]
00:21:50.045   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.045   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.045   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu26/cpufreq/base_frequency ]]
00:21:50.045   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.045   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000106
00:21:50.045   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.045   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.045   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_26
00:21:50.045   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_26[@]'
00:21:50.045   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.045   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_26
00:21:50.045   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_26[@]'
00:21:50.045   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.045   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.045    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 26 0xce
00:21:50.045   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.045   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.046   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.046   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.046   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.046   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.046   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.046   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.046   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.046   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.046   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
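(Lines @262-@273 in each block above are plain sysfs reads under /sys/devices/system/cpu/cpuN/cpufreq. A rough equivalent for one CPU; the helper name is invented for illustration, and the exact files common.sh reads for cur/min/max (scaling_* vs cpuinfo_*) are assumed:)

    probe_cpufreq() {    # hypothetical helper; $1 is a cpu index
        local idx=$1 d=/sys/devices/system/cpu/cpu$1/cpufreq
        [[ -e $d ]] || return 0
        cpufreq_drivers[idx]=$(< "$d/scaling_driver")       # intel_pstate in this log
        cpufreq_governors[idx]=$(< "$d/scaling_governor")   # powersave in this log
        [[ -e $d/base_frequency ]] && cpufreq_base_freqs[idx]=$(< "$d/base_frequency")
        cpufreq_cur_freqs[idx]=$(< "$d/scaling_cur_freq")
        cpufreq_max_freqs[idx]=$(< "$d/scaling_max_freq")
        cpufreq_min_freqs[idx]=$(< "$d/scaling_min_freq")
    }
    probe_cpufreq 26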
00:21:50.046   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.046   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=27
00:21:50.046   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu27/cpufreq ]]
00:21:50.046   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.046   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.046   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu27/cpufreq/base_frequency ]]
00:21:50.046   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.046   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.046   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.046   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.046   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_27
00:21:50.046   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_27[@]'
00:21:50.046   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.046   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_27
00:21:50.046   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_27[@]'
00:21:50.046   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.046   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.046    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 27 0xce
00:21:50.046   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.046   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.046   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.046   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.046   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.046   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.046   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.046   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.046   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.046   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.046   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.046   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.046   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.046   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.047   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=28
00:21:50.047   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu28/cpufreq ]]
00:21:50.047   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.047   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.047   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu28/cpufreq/base_frequency ]]
00:21:50.047   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.047   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.047   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.047   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.047   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_28
00:21:50.047   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_28[@]'
00:21:50.047   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.047   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_28
00:21:50.047   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_28[@]'
00:21:50.047   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.047   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.047    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 28 0xce
00:21:50.047   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.047   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.047   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.047   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.047   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.047   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.047   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.047   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.047   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.047   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.047   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.047   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.047   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.047   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
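(The num_freqs=14 at @306 and the bump at @308 follow from the logged frequencies: 14 scaling steps spaced 100000 kHz apart between 2300000 and 1000000, plus one slot for the turbo sentinel since base 2300000 kHz is below the package max 3700000 kHz:)

    echo $(( (2300000 - 1000000) / 100000 + 1 ))       # 14 non-turbo steps
    echo $(( (2300000 - 1000000) / 100000 + 1 + 1 ))   # 15 slots with the 2300001 sentinel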
00:21:50.311   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.311   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=29
00:21:50.311   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu29/cpufreq ]]
00:21:50.311   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.311   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.311   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu29/cpufreq/base_frequency ]]
00:21:50.311   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.311   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.311   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.311   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.311   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_29
00:21:50.311   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_29[@]'
00:21:50.311   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.311   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_29
00:21:50.311   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_29[@]'
00:21:50.311   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.311   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.311    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 29 0xce
00:21:50.311   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.311   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.311   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.311   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.311   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.311   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.311   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.311   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.311   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.311   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.311   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.311   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.311   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.311   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.311   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=3
00:21:50.311   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu3/cpufreq ]]
00:21:50.311   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.311   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.311   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu3/cpufreq/base_frequency ]]
00:21:50.311   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.311   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.311   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.311   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.312   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_3
00:21:50.312   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_3[@]'
00:21:50.312   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.312   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_3
00:21:50.312   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_3[@]'
00:21:50.312   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.312   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.312    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 3 0xce
00:21:50.312   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.312   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.312   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.312   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.312   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.312   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.312   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.312   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.312   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.312   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.312   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
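(The cpu_idx sequence above runs 28, 29, 3, 30 because the loop at @260 expands the extglob pattern "$sysfs_cpu/cpu"+([0-9]), and pathname expansion orders matches as strings, not numbers: under the usual C collation "cpu3" sorts between "cpu29" and "cpu30". A quick check of the same ordering:)

    printf '%s\n' cpu28 cpu29 cpu3 cpu30 | LC_ALL=C sort
    # cpu28
    # cpu29
    # cpu3
    # cpu30   <- same visitation order as the cpu_idx values in this trace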
00:21:50.312   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.312   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=30
00:21:50.312   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu30/cpufreq ]]
00:21:50.312   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.312   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.312   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu30/cpufreq/base_frequency ]]
00:21:50.312   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.312   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.312   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.312   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.312   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_30
00:21:50.312   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_30[@]'
00:21:50.312   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.312   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_30
00:21:50.312   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_30[@]'
00:21:50.312   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.312   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.312    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 30 0xce
00:21:50.312   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.312   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.312   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.312   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.312   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.312   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.312   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.312   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.312   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.312   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.312   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.312   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.312   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.312   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
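[Editor's note] The rdmsr.pl call above (@295) reads MSR 0xCE (MSR_PLATFORM_INFO); bits 15:8 hold the maximum non-turbo ratio in 100 MHz units. The same arithmetic applied to the value from this trace:

    non_turbo_ratio=0x70a2cf3811700
    echo $(( (non_turbo_ratio >> 8) & 0xff ))   # 23 -> 23 * 100000 kHz = 2300000, matching @298/@304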
00:21:50.313   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.313   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=31
00:21:50.313   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu31/cpufreq ]]
00:21:50.313   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.313   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.313   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu31/cpufreq/base_frequency ]]
00:21:50.313   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.313   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000012
00:21:50.313   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.313   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.313   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_31
00:21:50.313   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_31[@]'
00:21:50.313   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.313   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_31
00:21:50.313   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_31[@]'
00:21:50.313   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.313   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.313    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 31 0xce
00:21:50.313   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.313   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.313   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.313   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.313   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.313   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.313   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.313   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.313   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.313   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.313   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.313   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.313   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.313   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
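[Editor's note] Lines @304-@316 in each block decide whether turbo is exposed: because base_max_freq (2300000 kHz) is below cpuinfo_max_freq (3700000 kHz), one extra frequency slot is added and filled with base+1 kHz (2300001), which intel_pstate appears to treat as "turbo allowed". A sketch with the values from this trace:

    base_max_freq=2300000 cpuinfo_max_freq=3700000 num_freqs=14 is_turbo=0
    if (( base_max_freq < cpuinfo_max_freq )); then
        (( num_freqs += 1 ))   # one extra slot for the turbo entry
        is_turbo=1
    fi
    echo "$num_freqs $is_turbo"   # 15 1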
00:21:50.313   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.313   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=32
00:21:50.313   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu32/cpufreq ]]
00:21:50.313   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.313   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.313   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu32/cpufreq/base_frequency ]]
00:21:50.313   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.313   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.313   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.313   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.313   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_32
00:21:50.313   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_32[@]'
00:21:50.313   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.313   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_32
00:21:50.313   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_32[@]'
00:21:50.313   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.313   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.313    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 32 0xce
00:21:50.313   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.313   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.313   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.313   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.313   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.314   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.314   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.314   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.314   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.314   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.314   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
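[Editor's note] The long @314/@315/@318 runs are a single loop unrolled by xtrace. A compact equivalent that reproduces the list seen for these CPUs (turbo sentinel first, then 100 MHz steps down from the base frequency); variable names as in the sketches above:

    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$(( base_max_freq + 1 ))                         # 2300001
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    echo "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000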
00:21:50.314   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.314   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=33
00:21:50.314   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu33/cpufreq ]]
00:21:50.314   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.314   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.314   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu33/cpufreq/base_frequency ]]
00:21:50.314   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.314   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.314   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.314   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.314   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_33
00:21:50.314   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_33[@]'
00:21:50.314   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.314   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_33
00:21:50.314   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_33[@]'
00:21:50.314   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.314   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.314    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 33 0xce
00:21:50.314   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.314   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.314   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.314   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.314   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.314   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.314   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.314   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.314   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.314   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.314   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.314   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.314   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.314   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
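[Editor's note] @275-@280 use a bash nameref so each CPU gets its own governors/freqs array, while a plain indexed array stores the name for later indirect expansion. A sketch of the pattern at top level (the script itself runs inside a function, hence local -n); the governor list here is an assumption, not taken from the log:

    declare -n available_governors=available_governors_cpu_33
    cpufreq_available_governors[33]='available_governors_cpu_33[@]'
    available_governors=(performance powersave)        # assigns through the nameref
    echo "${!cpufreq_available_governors[33]}"         # performance powersave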
00:21:50.315   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.315   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=34
00:21:50.315   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq ]]
00:21:50.315   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.315   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.315   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq/base_frequency ]]
00:21:50.315   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.315   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.315   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.315   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.315   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_34
00:21:50.315   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_34[@]'
00:21:50.315   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.315   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_34
00:21:50.315   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_34[@]'
00:21:50.315   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.315   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.315    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 34 0xce
00:21:50.315   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.315   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.315   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.315   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.315   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.315   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.315   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.315   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.315   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.315   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.315   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.315   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.315   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.315   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
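[Editor's note] The @299 check compares the sysfs base frequency (scaled to ratio units) against the MSR non-turbo ratio; a core whose base frequency exceeds the package ratio would be flagged high priority (the pattern asymmetric-base-frequency parts such as Intel SST-BF expose — an interpretation, not stated in the log). Here 2300000 / 100000 = 23 is not greater than 23, so @303 records cpufreq_high_prio=0:

    (( 2300000 / 100000 > 23 )) && echo high-prio || echo normal   # normal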
00:21:50.315   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.315   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=35
00:21:50.315   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq ]]
00:21:50.315   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.315   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.315   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq/base_frequency ]]
00:21:50.315   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.315   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.315   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.315   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.315   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_35
00:21:50.315   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_35[@]'
00:21:50.315   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.315   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_35
00:21:50.315   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_35[@]'
00:21:50.315   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.316   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.316    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 35 0xce
00:21:50.316   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.316   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.316   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.316   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.316   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.316   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.316   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.316   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.316   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.316   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.316   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
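[Editor's note] rdmsr.pl is SPDK's own helper for reading an MSR on a given CPU. If msr-tools is installed and the msr kernel module is loaded, an equivalent read for CPU 36 (assumption: same 0xCE register as above) would be:

    sudo modprobe msr
    sudo rdmsr -p 36 0xce    # prints the raw value in hex, e.g. 70a2cf3811700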
00:21:50.316   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.316   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=36
00:21:50.316   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq ]]
00:21:50.316   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.316   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.316   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq/base_frequency ]]
00:21:50.316   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.316   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.316   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.316   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.316   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_36
00:21:50.316   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_36[@]'
00:21:50.316   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.316   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_36
00:21:50.316   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_36[@]'
00:21:50.316   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.316   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.316    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 36 0xce
00:21:50.316   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.316   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.316   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.316   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.316   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.316   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.316   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.316   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.316   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.316   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.316   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.316   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.316   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.316   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.317   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=37
00:21:50.317   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq ]]
00:21:50.317   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.317   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.317   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq/base_frequency ]]
00:21:50.317   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.317   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.317   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_37
00:21:50.317   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_37[@]'
00:21:50.317   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.317   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_37
00:21:50.317   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_37[@]'
00:21:50.317   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.317   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.317    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 37 0xce
00:21:50.317   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.317   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.317   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.317   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.317   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.317   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.317   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.317   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.317   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.317   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.317   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
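For reference, the per-CPU discovery traced above reduces to reading a handful of cpufreq sysfs attributes. A minimal standalone sketch, assuming an intel_pstate machine like this node; the exact attribute names behind the current/max/min values are an assumption, since the trace only shows the resulting numbers:

    #!/usr/bin/env bash
    # Probe one CPU the way scheduler/common.sh does for cpu37 above.
    cpu=/sys/devices/system/cpu/cpu37
    if [[ -e $cpu/cpufreq ]]; then
        driver=$(< "$cpu/cpufreq/scaling_driver")        # intel_pstate
        governor=$(< "$cpu/cpufreq/scaling_governor")    # powersave
        [[ -e $cpu/cpufreq/base_frequency ]] &&
            base=$(< "$cpu/cpufreq/base_frequency")      # 2300000
        cur=$(< "$cpu/cpufreq/scaling_cur_freq")         # assumed source of cur_freqs
        max=$(< "$cpu/cpufreq/scaling_max_freq")         # 2300001
        min=$(< "$cpu/cpufreq/scaling_min_freq")         # 1000000
        governors=($(< "$cpu/cpufreq/scaling_available_governors"))
        printf '%s/%s base=%s cur=%s max=%s min=%s\n' \
            "$driver" "$governor" "$base" "$cur" "$max" "$min"
    fi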
00:21:50.317   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.317   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=38
00:21:50.317   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq ]]
00:21:50.317   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.317   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.317   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq/base_frequency ]]
00:21:50.317   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.317   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.317   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_38
00:21:50.317   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_38[@]'
00:21:50.317   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.317   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_38
00:21:50.317   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_38[@]'
00:21:50.317   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.317   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.317    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 38 0xce
00:21:50.317   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.317   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.317   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.317   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.317   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.317   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.317   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.317   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.317   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.317   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.317   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.317   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.317   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
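The one step that leaves sysfs is the turbo probe: rdmsr.pl reads MSR 0xCE (MSR_PLATFORM_INFO), and bits 15:8 of the returned value are the maximum non-turbo ratio in units of 100 MHz. Decoding the value traced above reproduces the numbers the script stores; a sketch:

    # Decode MSR_PLATFORM_INFO as seen in the trace above.
    msr=0x70a2cf3811700
    ratio=$(( (msr >> 8) & 0xff ))                 # bits 15:8 -> 0x17 = 23
    echo "non-turbo ratio: $ratio"                 # cpufreq_non_turbo_ratio=23
    echo "base freq: $(( ratio * 100000 )) kHz"    # 2300000 kHz = 2.3 GHz

Since the base frequency in ratio units (2300000 / 100000 = 23) does not exceed that non-turbo ratio, the CPU is not flagged high-priority; turbo is instead inferred from cpuinfo_max_freq (3700000) sitting above the 2300000 base maximum.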
00:21:50.318   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.318   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=39
00:21:50.318   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq ]]
00:21:50.318   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.318   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.318   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq/base_frequency ]]
00:21:50.318   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.318   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2300219
00:21:50.318   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.318   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.318   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_39
00:21:50.318   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_39[@]'
00:21:50.318   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.318   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_39
00:21:50.318   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_39[@]'
00:21:50.318   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.318   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.318    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 39 0xce
00:21:50.318   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.318   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.318   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.318   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.318   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.318   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.318   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.318   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.318   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.318   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.318   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.318   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.318   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.318   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.319   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.319   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.319   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.319   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.319   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
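intel_pstate exposes no scaling_available_frequencies list, so the script synthesizes one, and every iteration of that synthesis shows up in the trace: (2300000 - 1000000) / 100000 + 1 = 14 regular 100 MHz steps, plus one extra slot because turbo is available. Slot 0 takes 2300001 (the scaling maximum, one above the base maximum), and the rest count down from 2300000 to 1000000. A condensed sketch of the loop:

    base_max=2300000 min_freq=1000000 step=100000
    is_turbo=1
    num_freqs=$(( (base_max - min_freq) / step + 1 + is_turbo ))   # 15 slots
    freqs=()
    for (( i = 0; i < num_freqs; i++ )); do
        if (( i == 0 && is_turbo )); then
            freqs[i]=$(( base_max + 1 ))                      # 2300001: the turbo slot
        else
            freqs[i]=$(( base_max - (i - is_turbo) * step ))  # 2300000 .. 1000000
        fi
    done
    printf '%s\n' "${freqs[@]}"    # matches the 15 values assigned per CPU above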
00:21:50.319   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.319   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=4
00:21:50.319   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq ]]
00:21:50.319   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.319   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.319   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq/base_frequency ]]
00:21:50.319   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.319   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2254321
00:21:50.319   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.319   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.319   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_4
00:21:50.319   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_4[@]'
00:21:50.319   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.319   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_4
00:21:50.319   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_4[@]'
00:21:50.319   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.319   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.319    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 4 0xce
00:21:50.582   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.582   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.582   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.582   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.582   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.582   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.582   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.582   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.582   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.582   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.582   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.582   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.582   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.582   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
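A bash detail worth noting in the trace: the results land in a separate array per CPU (available_freqs_cpu_37, available_freqs_cpu_38, ...), but the loop body always writes through the same local name via a nameref, and a string handle like 'available_freqs_cpu_37[@]' is kept for later indirect expansion. A minimal sketch of the pattern, with hypothetical names and array contents:

    collect() {
        local idx
        for idx in "$@"; do
            local -n freqs=freqs_cpu_$idx   # nameref: 'freqs' aliases the per-CPU array
            freqs=(2300001 2300000 1000000) # assignment lands in freqs_cpu_$idx
            unset -n freqs                  # drop the alias before the next rebind
        done
    }
    collect 4 37
    handle='freqs_cpu_37[@]'
    echo "cpu37: ${!handle}"                # indirect expansion through the handle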
00:21:50.582   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.582   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=40
00:21:50.582   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq ]]
00:21:50.582   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.582   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.582   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq/base_frequency ]]
00:21:50.582   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.582   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2296569
00:21:50.582   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.582   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.582   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_40
00:21:50.582   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_40[@]'
00:21:50.582   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.582   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_40
00:21:50.582   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_40[@]'
00:21:50.582   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.582   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.582    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 40 0xce
00:21:50.582   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.582   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.583   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.583   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.583   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.583   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.583   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.583   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.583   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.583   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.583   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.583   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=41
00:21:50.583   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu41/cpufreq ]]
00:21:50.583   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.583   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.583   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu41/cpufreq/base_frequency ]]
00:21:50.583   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.583   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.583   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.583   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.583   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_41
00:21:50.583   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_41[@]'
00:21:50.583   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.583   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_41
00:21:50.583   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_41[@]'
00:21:50.583   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.583   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.583    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 41 0xce
00:21:50.583   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.583   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.583   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.583   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.583   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.583   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.583   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.583   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.583   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.583   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.583   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.583   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.583   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.583   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.584   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=42
00:21:50.584   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq ]]
00:21:50.584   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.584   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.584   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq/base_frequency ]]
00:21:50.584   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.584   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.584   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.584   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.584   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_42
00:21:50.584   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_42[@]'
00:21:50.584   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.584   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_42
00:21:50.584   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_42[@]'
00:21:50.584   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.584   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.584    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 42 0xce
00:21:50.584   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.584   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.584   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.584   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.584   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.584   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.584   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.584   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.584   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.584   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.584   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.584   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.584   00:52:39	-- scheduler/common.sh@314-318 -- # available_freqs+=(2000000 1900000 1800000 1700000 1600000 1500000 1400000 1300000 1200000 1100000 1000000)
00:21:50.584   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.584   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=43
00:21:50.584   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq ]]
00:21:50.584   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.584   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.584   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq/base_frequency ]]
00:21:50.584   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.584   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.584   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.584   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.584   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_43
00:21:50.584   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_43[@]'
00:21:50.584   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.584   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_43
00:21:50.584   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_43[@]'
00:21:50.584   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.584   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.584    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 43 0xce
00:21:50.584   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.584   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.584   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.585   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.585   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.585   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.585   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.585   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.585   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.585   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.585   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.585   00:52:39	-- scheduler/common.sh@313-318 -- # available_freqs=(2300001 2300000 2200000 2100000 2000000 1900000 1800000 1700000 1600000 1500000 1400000 1300000 1200000 1100000 1000000)
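
Note: the per-frequency trace above unrolls a simple construction. For intel_pstate CPUs the script reserves slot 0 for a synthetic turbo entry (scaling_max_freq, i.e. the base frequency + 1 kHz) whenever the cpuinfo maximum exceeds the base frequency, then steps from the base frequency down to the minimum in 100000 kHz increments. The sketch below reconstructs that logic from the trace; it is illustrative, not the verbatim scheduler/common.sh source, and the input values are the ones logged for cpu43.

    #!/usr/bin/env bash
    # Reconstructed from the trace above: build available_freqs for one
    # intel_pstate CPU. Values as logged for cpu43 (all in kHz).
    base_max_freq=2300000      # cpufreq/base_frequency
    cpuinfo_max_freq=3700000   # cpufreq/cpuinfo_max_freq
    min_freq=1000000           # cpufreq/cpuinfo_min_freq

    # 100 MHz steps from base down to min -> 14, matching num_freqs=14
    num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 ))
    is_turbo=0
    if (( base_max_freq < cpuinfo_max_freq )); then
        (( num_freqs += 1 ))   # extra slot for the turbo entry
        is_turbo=1
    fi

    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            # scaling_max_freq is base + 1 kHz, the conventional
            # turbo marker seen in the trace (2300001)
            available_freqs[freq]=$(( base_max_freq + 1 ))
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    echo "${available_freqs[@]}"
    # -> 2300001 2300000 2200000 ... 1000000 (15 entries)
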
00:21:50.585   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.585   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=44
00:21:50.585   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq ]]
00:21:50.585   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.585   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.585   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq/base_frequency ]]
00:21:50.585   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.585   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.585   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.585   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.585   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_44
00:21:50.585   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_44[@]'
00:21:50.585   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.585   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_44
00:21:50.585   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_44[@]'
00:21:50.585   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.585   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.585    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 44 0xce
00:21:50.585   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.585   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.585   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.585   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.585   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.585   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.585   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.585   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.585   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.585   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.585   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.585   00:52:39	-- scheduler/common.sh@313-318 -- # available_freqs=(2300001 2300000 2200000 2100000 2000000 1900000 1800000 1700000 1600000 1500000 1400000 1300000 1200000 1100000 1000000)
00:21:50.586   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.586   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=45
00:21:50.586   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq ]]
00:21:50.586   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.586   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.586   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq/base_frequency ]]
00:21:50.586   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.586   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=999921
00:21:50.586   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.586   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.586   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_45
00:21:50.586   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_45[@]'
00:21:50.586   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.586   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_45
00:21:50.586   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_45[@]'
00:21:50.586   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.586   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.586    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 45 0xce
00:21:50.586   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.586   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.586   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.586   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.586   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.586   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.586   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.586   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.586   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.586   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.586   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.586   00:52:39	-- scheduler/common.sh@313-318 -- # available_freqs=(2300001 2300000 2200000 2100000 2000000 1900000 1800000 1700000 1600000 1500000 1400000 1300000 1200000 1100000 1000000)
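
The non_turbo_ratio value logged by rdmsr.pl is the raw contents of MSR 0xCE (MSR_PLATFORM_INFO), where the maximum non-turbo ratio lives in bits 15:8. A quick sketch of the decode the script performs on the value above, using plain bash arithmetic (illustrative, not quoted from the script):

    non_turbo_ratio=0x70a2cf3811700
    ratio=$(( (non_turbo_ratio >> 8) & 0xff ))  # bits 15:8 -> 0x17
    echo "$ratio"               # 23, matching cpufreq_non_turbo_ratio above
    echo $(( ratio * 100000 ))  # 2300000 kHz, matching base_max_freq
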
00:21:50.586   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.586   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=46
00:21:50.586   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq ]]
00:21:50.586   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.586   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.586   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq/base_frequency ]]
00:21:50.586   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.586   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.586   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.586   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.586   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_46
00:21:50.586   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_46[@]'
00:21:50.586   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.586   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_46
00:21:50.586   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_46[@]'
00:21:50.586   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.586   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.586    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 46 0xce
00:21:50.587   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.587   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.587   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.587   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.587   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.587   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.587   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.587   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.587   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.587   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.587   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.587   00:52:39	-- scheduler/common.sh@313-318 -- # available_freqs=(2300001 2300000 2200000 2100000 2000000 1900000 1800000 1700000 1600000 1500000 1400000 1300000 1200000 1100000 1000000)
00:21:50.587   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.587   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=47
00:21:50.587   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu47/cpufreq ]]
00:21:50.587   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.587   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.587   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu47/cpufreq/base_frequency ]]
00:21:50.587   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.587   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.587   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.587   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.587   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_47
00:21:50.587   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_47[@]'
00:21:50.587   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.587   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_47
00:21:50.587   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_47[@]'
00:21:50.587   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.587   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.587    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 47 0xce
00:21:50.587   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.587   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.587   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.587   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.587   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.587   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.587   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.587   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.587   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.587   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.587   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.587   00:52:39	-- scheduler/common.sh@313-318 -- # available_freqs=(2300001 2300000 2200000 2100000 2000000 1900000 1800000 1700000 1600000 1500000 1400000 1300000 1200000 1100000 1000000)
00:21:50.588   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.588   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=48
00:21:50.588   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq ]]
00:21:50.588   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.588   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.588   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq/base_frequency ]]
00:21:50.588   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.588   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.588   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.588   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.588   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_48
00:21:50.588   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_48[@]'
00:21:50.588   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.588   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_48
00:21:50.588   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_48[@]'
00:21:50.588   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.588   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.588    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 48 0xce
00:21:50.588   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.588   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.588   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.588   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.588   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.588   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.588   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.588   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.588   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.588   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.588   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.588   00:52:39	-- scheduler/common.sh@313-318 -- # available_freqs=(2300001 2300000 2200000 2100000 2000000 1900000 1800000 1700000 1600000 1500000 1400000 1300000 1200000 1100000 1000000)
00:21:50.588   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.588   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=49
00:21:50.588   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq ]]
00:21:50.588   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.588   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.588   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq/base_frequency ]]
00:21:50.588   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.588   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.588   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.588   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.588   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_49
00:21:50.588   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_49[@]'
00:21:50.588   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.588   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_49
00:21:50.588   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_49[@]'
00:21:50.588   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.588   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.588    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 49 0xce
00:21:50.588   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.588   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.588   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.588   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.588   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.588   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.588   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.588   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.588   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.589   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.589   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
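The block above is one full pass of the frequency-table loop traced from scheduler/common.sh lines 313-318: when the CPU is flagged as turbo-capable, slot 0 receives the base maximum plus 1 kHz as a turbo marker, and every later slot steps down by 100 MHz. A minimal sketch of that loop, reconstructed from the trace (the real common.sh body may differ in detail):

    available_freqs=()
    for ((freq = 0; freq < num_freqs; freq++)); do
        if ((freq == 0 && cpufreq_is_turbo[cpu_idx] == 1)); then
            # Turbo slot: base max + 1 kHz (2300001 in this run)
            available_freqs[freq]=$((base_max_freq + 1))
        else
            # Remaining slots descend in 100000 kHz steps: 2300000, 2200000, ... 1000000
            available_freqs[freq]=$((base_max_freq - (freq - cpufreq_is_turbo[cpu_idx]) * 100000))
        fi
    done

With num_freqs=15 and base_max_freq=2300000 this reproduces the fifteen assignments logged for the CPU above.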
00:21:50.589   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.589   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=5
00:21:50.589   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq ]]
00:21:50.589   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.589   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.589   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq/base_frequency ]]
00:21:50.589   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.589   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.589   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.589   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.589   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_5
00:21:50.589   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_5[@]'
00:21:50.589   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.589   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_5
00:21:50.589   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_5[@]'
00:21:50.589   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.589   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.589    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 5 0xce
00:21:50.589   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.589   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.589   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.589   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.589   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.589   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.589   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.589   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.589   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.589   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.589   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.589   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.589   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.589   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.590   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.590   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.590   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.590   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
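Each per-CPU block begins by shelling out to rdmsr.pl to read MSR 0xCE (MSR_PLATFORM_INFO); bits 15:8 of that register encode the maximum non-turbo ratio in 100 MHz units. Decoding the raw value logged above reproduces both numbers the script stores (a worked check, not the script's exact expression):

    non_turbo_ratio=0x70a2cf3811700
    ratio=$(( (non_turbo_ratio >> 8) & 0xff ))    # bits 15:8 -> 0x17 = 23
    echo "$ratio"                                 # 23, as stored in cpufreq_non_turbo_ratio
    echo $(( ratio * 100000 ))                    # 2300000 kHz, the non-turbo base maximum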
00:21:50.590   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.590   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=50
00:21:50.590   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq ]]
00:21:50.590   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.590   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.590   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq/base_frequency ]]
00:21:50.590   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.590   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.590   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.590   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.590   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_50
00:21:50.590   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_50[@]'
00:21:50.590   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.590   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_50
00:21:50.590   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_50[@]'
00:21:50.590   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.590   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.590    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 50 0xce
00:21:50.590   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.590   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.590   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.590   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.590   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.590   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.590   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.590   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.590   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.590   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.590   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.590   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.590   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.590   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.590   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.590   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.590   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.854   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.854   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.854   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
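The turbo check at common.sh lines 307-309 plays out identically for every CPU in this run: the sysfs base maximum (2300000 kHz) is below cpuinfo_max_freq (3700000 kHz), so the script concludes turbo is available, grows the table by one slot for the turbo entry, and flags the CPU. In outline:

    if (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )); then
        (( num_freqs += 1 ))           # one extra slot, 14 -> 15 here
        cpufreq_is_turbo[cpu_idx]=1    # consumed by the fill loop's slot-0 branch
    fi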
00:21:50.854   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.854   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=51
00:21:50.854   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq ]]
00:21:50.854   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.854   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.854   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq/base_frequency ]]
00:21:50.854   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.854   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.854   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.854   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.854   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_51
00:21:50.854   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_51[@]'
00:21:50.854   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.854   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_51
00:21:50.854   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_51[@]'
00:21:50.854   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.854   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.854    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 51 0xce
00:21:50.854   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.854   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.854   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.854   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.855   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.855   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.855   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.855   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.855   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.855   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.855   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
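The `local -n` lines show how the script keeps a separate governor list and frequency list per CPU using plain indexed arrays: it declares a nameref whose target name embeds the CPU index, fills the array through the nameref, and records the indirect-expansion string for later lookup. A self-contained illustration of the pattern (the array contents here are abbreviated, hypothetical values, not the full table):

    #!/usr/bin/env bash
    declare -a cpufreq_available_freqs
    cpu_idx=51
    declare -n available_freqs="available_freqs_cpu_${cpu_idx}"   # nameref to the per-CPU array
    available_freqs=(2300001 2300000 2200000)
    cpufreq_available_freqs[cpu_idx]="available_freqs_cpu_${cpu_idx}[@]"
    # Any CPU's table can later be recovered via indirect expansion:
    echo "${!cpufreq_available_freqs[cpu_idx]}"                   # -> 2300001 2300000 2200000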
00:21:50.855   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.855   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=52
00:21:50.855   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq ]]
00:21:50.855   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.855   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.855   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq/base_frequency ]]
00:21:50.855   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.855   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=999882
00:21:50.855   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.855   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.855   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_52
00:21:50.855   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_52[@]'
00:21:50.855   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.855   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_52
00:21:50.855   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_52[@]'
00:21:50.855   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.855   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.855    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 52 0xce
00:21:50.855   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.855   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.855   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.855   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.855   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.855   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.855   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.855   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.855   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.855   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.855   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.855   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.855   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.855   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.856   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=53
00:21:50.856   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq ]]
00:21:50.856   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.856   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.856   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq/base_frequency ]]
00:21:50.856   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.856   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.856   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.856   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.856   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_53
00:21:50.856   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_53[@]'
00:21:50.856   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.856   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_53
00:21:50.856   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_53[@]'
00:21:50.856   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.856   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.856    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 53 0xce
00:21:50.856   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.856   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.856   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.856   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.856   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.856   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.856   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.856   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.856   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.856   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.856   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.856   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=54
00:21:50.856   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq ]]
00:21:50.856   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.856   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.856   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq/base_frequency ]]
00:21:50.856   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.856   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000189
00:21:50.856   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.856   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.856   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_54
00:21:50.856   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_54[@]'
00:21:50.856   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.856   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_54
00:21:50.856   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_54[@]'
00:21:50.856   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.856   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.856    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 54 0xce
00:21:50.856   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.856   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.856   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.856   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.856   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.856   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.856   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.856   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.856   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.856   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.856   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.856   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.856   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.856   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.857   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=55
00:21:50.857   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq ]]
00:21:50.857   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.857   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.857   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq/base_frequency ]]
00:21:50.857   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.857   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.857   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.857   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.857   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_55
00:21:50.857   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_55[@]'
00:21:50.857   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.857   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_55
00:21:50.857   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_55[@]'
00:21:50.857   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.857   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.857    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 55 0xce
00:21:50.857   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.857   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.857   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.857   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.857   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.857   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.857   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.857   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.857   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.857   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.857   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.857   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.857   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.857   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
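
Each cpuN block below follows the same shape: probe /sys/devices/system/cpu/cpuN/cpufreq, then read MSR 0xCE (MSR_PLATFORM_INFO) through rdmsr.pl. Bits 15:8 of that MSR carry the maximum non-turbo ratio, which common.sh@295-298 scales by 100 MHz to get the 2300000 kHz base frequency seen throughout this trace. A minimal sketch of that decode, using the raw value from the log (variable names are illustrative):

    # Decode MSR 0xCE (MSR_PLATFORM_INFO) the way scheduler/common.sh@295-298 does.
    non_turbo_ratio=0x70a2cf3811700              # raw rdmsr.pl output from the trace
    ratio=$(( (non_turbo_ratio >> 8) & 0xff ))   # bits 15:8 -> 0x17 = 23
    base_max_freq=$(( ratio * 100000 ))          # one ratio step is 100 MHz; kHz units
    echo "ratio=$ratio base_max_freq=$base_max_freq"   # ratio=23 base_max_freq=2300000
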
00:21:50.857   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.858   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=56
00:21:50.858   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq ]]
00:21:50.858   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.858   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.858   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq/base_frequency ]]
00:21:50.858   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.858   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.858   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.858   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.858   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_56
00:21:50.858   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_56[@]'
00:21:50.858   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.858   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_56
00:21:50.858   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_56[@]'
00:21:50.858   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.858   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.858    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 56 0xce
00:21:50.858   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.858   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.858   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.858   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.858   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.858   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.858   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.858   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.858   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.858   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.858   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
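
The frequency table built at common.sh@304-318 is synthesized rather than read from sysfs: for this driver case the script walks from the base max frequency down to the cpuinfo minimum in 100 MHz steps (num_freqs=14), and because turbo headroom exists (2300000 < 3700000) it reserves slot 0 for base_max_freq+1, which is why every per-CPU list starts with 2300001. One way to reproduce the same table standalone (a sketch under the values in this trace, not SPDK's exact code):

    base_max_freq=2300000      # kHz, from MSR ratio 23
    cpuinfo_max_freq=3700000   # kHz, absolute turbo max
    cpuinfo_min_freq=1000000   # kHz
    num_freqs=$(( (base_max_freq - cpuinfo_min_freq) / 100000 + 1 ))   # 14
    is_turbo=0
    if (( base_max_freq < cpuinfo_max_freq )); then
        (( num_freqs += 1 ))   # extra slot for the turbo entry
        is_turbo=1
    fi
    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$(( base_max_freq + 1 ))   # turbo marker: 2300001
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    printf '%s\n' "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000
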
00:21:50.858   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.858   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=57
00:21:50.858   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq ]]
00:21:50.858   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.858   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.858   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq/base_frequency ]]
00:21:50.858   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.858   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.858   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.858   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.858   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_57
00:21:50.858   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_57[@]'
00:21:50.858   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.858   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_57
00:21:50.858   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_57[@]'
00:21:50.858   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.858   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.858    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 57 0xce
00:21:50.858   00:52:39	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.858   00:52:39	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.858   00:52:39	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.858   00:52:39	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.858   00:52:39	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.858   00:52:39	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.858   00:52:39	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.858   00:52:39	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.858   00:52:39	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.858   00:52:39	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.858   00:52:39	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.858   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.858   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.858   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:39	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:39	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:39	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
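
The @275-280 pattern in every block is bash's workaround for its lack of nested arrays: local -n makes available_freqs a nameref onto a per-CPU array (available_freqs_cpu_N), while the parent array stores the string 'available_freqs_cpu_N[@]' so callers can recover the list later through ${!...} indirection. A self-contained sketch of the idiom (the function and sample values are ours, not SPDK's):

    declare -a cpufreq_available_freqs
    populate() {
        local cpu_idx=$1; shift
        # nameref: assignments to available_freqs land in the per-CPU array
        local -n available_freqs=available_freqs_cpu_$cpu_idx
        # remember the name so other code can expand it indirectly later
        cpufreq_available_freqs[cpu_idx]="available_freqs_cpu_${cpu_idx}[@]"
        available_freqs=("$@")
    }
    populate 56 2300001 2300000 2200000
    echo "cpu56: ${!cpufreq_available_freqs[56]}"   # indirect expansion -> the list
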
00:21:50.859   00:52:39	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.859   00:52:39	-- scheduler/common.sh@261 -- # cpu_idx=58
00:21:50.859   00:52:39	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq ]]
00:21:50.859   00:52:39	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.859   00:52:39	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.859   00:52:39	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq/base_frequency ]]
00:21:50.859   00:52:39	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.859   00:52:39	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.859   00:52:39	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.859   00:52:39	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.859   00:52:39	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_58
00:21:50.859   00:52:39	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_58[@]'
00:21:50.859   00:52:39	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.859   00:52:39	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_58
00:21:50.859   00:52:39	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_58[@]'
00:21:50.859   00:52:39	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.859   00:52:39	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.859    00:52:39	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 58 0xce
00:21:50.859   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.859   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.859   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.859   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.859   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.859   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.859   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.859   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.859   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.859   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.859   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.859   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.859   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.859   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
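
The @299 test that keeps producing cpufreq_high_prio=0 compares the per-core base frequency, expressed as a 100 MHz ratio, against the package-wide non-turbo ratio from the MSR; presumably it flags cores whose base clock is raised above the package default (favored cores). On this machine both sides are 23, so no core qualifies:

    base_freq_khz=2300000
    non_turbo_ratio=23
    (( base_freq_khz / 100000 > non_turbo_ratio )) && high_prio=1 || high_prio=0
    echo "$high_prio"   # 0: 23 is not greater than 23
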
00:21:50.860   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.860   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=59
00:21:50.860   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq ]]
00:21:50.860   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.860   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.860   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq/base_frequency ]]
00:21:50.860   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.860   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000316
00:21:50.860   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.860   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.860   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_59
00:21:50.860   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_59[@]'
00:21:50.860   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.860   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_59
00:21:50.860   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_59[@]'
00:21:50.860   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.860   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.860    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 59 0xce
00:21:50.860   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.860   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.860   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.860   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.860   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.860   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.860   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.860   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.860   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.860   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.860   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.860   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=6
00:21:50.860   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq ]]
00:21:50.860   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.860   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.860   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq/base_frequency ]]
00:21:50.860   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.860   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.860   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.860   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.860   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_6
00:21:50.860   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_6[@]'
00:21:50.860   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.860   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_6
00:21:50.860   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_6[@]'
00:21:50.860   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.860   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.860    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 6 0xce
00:21:50.860   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.860   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.860   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.860   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.860   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.860   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.860   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.860   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.860   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.860   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.860   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.860   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.860   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.860   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
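
The jump from cpu_idx=59 to cpu_idx=6 (and then back to 60) above is not a bug: the @260 glob "$sysfs_cpu/cpu"+([0-9]) expands pathnames in lexicographic order, so cpu6 sorts between cpu59 and cpu60. The same ordering can be seen directly:

    shopt -s extglob
    cd /sys/devices/system/cpu || exit 1
    printf '%s\n' cpu+([0-9])
    # cpu0 cpu1 cpu10 cpu11 ... cpu19 cpu2 ... cpu59 cpu6 cpu60 cpu61 ...
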
00:21:50.861   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.861   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=60
00:21:50.861   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq ]]
00:21:50.861   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.861   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.861   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq/base_frequency ]]
00:21:50.861   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.861   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:50.861   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.861   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.861   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_60
00:21:50.861   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_60[@]'
00:21:50.861   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.861   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_60
00:21:50.861   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_60[@]'
00:21:50.861   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.861   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.861    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 60 0xce
00:21:50.861   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.861   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.861   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.861   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.861   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.861   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.861   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.861   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.861   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.861   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.861   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.861   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.861   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.861   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:50.861   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=61
00:21:50.861   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq ]]
00:21:50.861   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:50.861   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:50.861   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq/base_frequency ]]
00:21:50.861   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:50.861   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000095
00:21:50.861   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:50.861   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:50.862   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_61
00:21:50.862   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_61[@]'
00:21:50.862   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:50.862   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_61
00:21:50.862   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_61[@]'
00:21:50.862   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:50.862   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:50.862    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 61 0xce
00:21:50.862   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:50.862   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:50.862   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:50.862   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:50.862   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:50.862   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:50.862   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:50.862   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:50.862   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:50.862   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:50.862   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:50.862   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:50.862   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:50.862   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.126   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=62
00:21:51.126   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq ]]
00:21:51.126   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.126   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.126   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq/base_frequency ]]
00:21:51.126   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.126   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.126   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.126   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.126   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_62
00:21:51.126   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_62[@]'
00:21:51.126   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.126   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_62
00:21:51.126   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_62[@]'
00:21:51.126   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.126   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.126    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 62 0xce
00:21:51.126   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.126   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.126   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.126   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.126   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.126   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.126   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.126   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.126   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.126   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.126   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.126   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.126   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.126   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
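(Editor's aside: the block above, repeated once per CPU, is bash xtrace of scheduler/common.sh probing one cpufreq sysfs directory. A minimal sketch of that probe follows; it is not the verbatim common.sh source. The array names — cpufreq_drivers, cpufreq_governors, cpufreq_base_freqs, and the per-CPU available_governors_cpu_N namerefs — come straight from the trace, while the specific sysfs node names (scaling_driver, scaling_governor, scaling_cur_freq, scaling_max_freq, scaling_min_freq) are assumed here to be the standard Linux cpufreq files.)

    #!/usr/bin/env bash
    # Hedged sketch of the per-CPU probe traced at common.sh@260-@280.
    shopt -s extglob  # needed for the +([0-9]) glob seen in the trace

    probe_cpufreq() {
    	local sysfs_cpu=/sys/devices/system/cpu cpu cpu_idx
    	for cpu in "$sysfs_cpu"/cpu+([0-9]); do
    		cpu_idx=${cpu##*cpu}
    		# Skip CPUs without cpufreq support (mirrors the @262 check)
    		[[ -e $cpu/cpufreq ]] || continue
    		cpufreq_drivers[cpu_idx]=$(< "$cpu/cpufreq/scaling_driver")      # intel_pstate in this run
    		cpufreq_governors[cpu_idx]=$(< "$cpu/cpufreq/scaling_governor")  # powersave in this run
    		if [[ -e $cpu/cpufreq/base_frequency ]]; then
    			cpufreq_base_freqs[cpu_idx]=$(< "$cpu/cpufreq/base_frequency")
    		fi
    		cpufreq_cur_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_cur_freq")
    		cpufreq_max_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_max_freq")
    		cpufreq_min_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_min_freq")
    		# Each CPU's governor list lands in its own array via a nameref,
    		# as the @275-@277 lines show for available_governors_cpu_62.
    		local -n available_governors=available_governors_cpu_$cpu_idx
    		available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
    		cpufreq_available_governors[cpu_idx]=available_governors_cpu_${cpu_idx}[@]
    		unset -n available_governors
    	done
    }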
00:21:51.126   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.126   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=63
00:21:51.126   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu63/cpufreq ]]
00:21:51.126   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.126   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.126   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu63/cpufreq/base_frequency ]]
00:21:51.126   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.126   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.126   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.126   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.126   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_63
00:21:51.127   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_63[@]'
00:21:51.127   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.127   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_63
00:21:51.127   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_63[@]'
00:21:51.127   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.127   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.127    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 63 0xce
00:21:51.127   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.127   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.127   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.127   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.127   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.127   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.127   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.127   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.127   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.127   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.127   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
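(Editor's aside: the common.sh@295-@298 lines above read MSR 0xCE — MSR_PLATFORM_INFO on Intel parts — through the rdmsr.pl helper and derive the non-turbo ratio from it. A short decoding sketch, using the exact value from the trace: bits 15:8 of the MSR hold the maximum non-turbo ratio, and one ratio step corresponds to 100 MHz (100000 kHz) on this platform, which is why 0x...1700 yields 23 and a 2300000 kHz base.)

    #!/usr/bin/env bash
    # Decode the non_turbo_ratio value captured in the trace.
    non_turbo_ratio=0x70a2cf3811700
    ratio=$(( (non_turbo_ratio >> 8) & 0xff ))   # bits 15:8 -> 0x17 == 23
    base_max_freq=$(( ratio * 100000 ))          # 23 * 100000 kHz == 2300000 kHz
    echo "non-turbo ratio: $ratio -> base max freq: ${base_max_freq} kHz"

This matches the trace: cpufreq_non_turbo_ratio[cpu_idx]=23 and base_max_freq=2300000, while cpuinfo_max_freqs stays at 3700000 because that figure includes turbo headroom.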
00:21:51.127   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.127   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=64
00:21:51.127   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq ]]
00:21:51.127   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.127   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.127   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq/base_frequency ]]
00:21:51.127   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.127   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.127   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.127   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.127   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_64
00:21:51.127   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_64[@]'
00:21:51.127   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.127   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_64
00:21:51.127   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_64[@]'
00:21:51.127   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.127   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.127    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 64 0xce
00:21:51.127   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.127   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.127   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.127   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.127   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.127   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.127   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.127   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.127   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.127   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.127   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.127   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.127   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.127   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
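(Editor's aside: the @304-@318 loop iterations above build each CPU's available_freqs table. A condensed sketch of that construction, under the values visible in the trace: 14 steps of 100000 kHz from 2300000 down to 1000000, plus one extra slot because cpuinfo max (3700000) exceeds the base max, with slot 0 set to base_max_freq+1 — the 2300001 kHz sentinel intel_pstate setups use to request turbo.)

    #!/usr/bin/env bash
    # Sketch of the frequency-table build traced at common.sh@306-@318.
    base_max_freq=2300000 min_freq=1000000 is_turbo=1
    num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 + is_turbo ))  # 15 in this run
    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
    	if (( freq == 0 && is_turbo == 1 )); then
    		# Turbo sentinel: one kHz above the non-turbo maximum.
    		available_freqs[freq]=$(( base_max_freq + 1 ))
    	else
    		available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
    	fi
    done
    printf '%s\n' "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000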
00:21:51.128   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.128   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=65
00:21:51.128   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq ]]
00:21:51.128   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.128   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.128   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq/base_frequency ]]
00:21:51.128   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.128   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.128   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.128   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.128   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_65
00:21:51.128   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_65[@]'
00:21:51.128   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.128   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_65
00:21:51.128   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_65[@]'
00:21:51.128   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.128   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.128    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 65 0xce
00:21:51.128   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.128   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.128   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.128   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.128   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.128   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.128   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.128   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.128   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.128   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.128   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.128   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.128   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.128   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.128   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=66
00:21:51.128   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq ]]
00:21:51.128   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.128   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.128   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq/base_frequency ]]
00:21:51.128   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.128   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.128   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.128   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.128   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_66
00:21:51.128   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_66[@]'
00:21:51.128   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.128   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_66
00:21:51.128   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_66[@]'
00:21:51.128   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.129   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.129    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 66 0xce
00:21:51.129   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.129   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.129   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.129   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.129   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.129   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.129   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.129   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.129   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.129   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.129   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.129   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=67
00:21:51.129   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq ]]
00:21:51.129   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.129   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.129   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq/base_frequency ]]
00:21:51.129   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.129   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.129   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.129   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.129   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_67
00:21:51.129   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_67[@]'
00:21:51.129   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.129   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_67
00:21:51.129   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_67[@]'
00:21:51.129   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.129   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.129    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 67 0xce
00:21:51.129   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.129   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.129   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.129   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.129   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.129   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.129   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.129   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.129   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.129   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.129   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.129   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.129   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.129   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.130   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=68
00:21:51.130   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq ]]
00:21:51.130   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.130   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.130   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq/base_frequency ]]
00:21:51.130   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.130   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.130   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.130   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.130   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_68
00:21:51.130   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_68[@]'
00:21:51.130   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.130   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_68
00:21:51.130   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_68[@]'
00:21:51.130   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.130   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.130    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 68 0xce
00:21:51.130   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.130   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.130   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.130   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.130   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.130   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.130   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.130   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.130   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.130   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.130   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.130   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.130   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.130   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
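The block above is one pass of the per-CPU frequency-enumeration loop in scheduler/common.sh. A minimal sketch of that loop, reconstructed from the traced statements at lines 313-318 (the 100 MHz decrement is inferred from the values in the trace, not shown in it):

    # Populate available_freqs for one CPU, highest setpoint first.
    # With turbo available, slot 0 gets base_max_freq + 1 (2300001 here),
    # the conventional turbo marker; later slots step down by 100000 kHz
    # until the cpuinfo minimum (1000000 kHz on this host) is reached.
    available_freqs=()
    for ((freq = 0; freq < num_freqs; freq++)); do
        if ((freq == 0 && cpufreq_is_turbo[cpu_idx] == 1)); then
            available_freqs[freq]=$((base_max_freq + 1))
        else
            available_freqs[freq]=$((base_max_freq - (freq - cpufreq_is_turbo[cpu_idx]) * 100000))
        fi
    done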
00:21:51.130   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.130   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=69
00:21:51.130   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq ]]
00:21:51.130   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.130   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.130   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq/base_frequency ]]
00:21:51.131   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.131   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.131   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.131   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.131   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_69
00:21:51.131   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_69[@]'
00:21:51.131   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.131   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_69
00:21:51.131   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_69[@]'
00:21:51.131   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.131   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.131    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 69 0xce
00:21:51.131   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.131   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.131   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.131   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.131   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.131   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.131   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.131   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.131   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.131   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.131   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.131   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=7
00:21:51.131   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq ]]
00:21:51.131   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.131   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.131   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq/base_frequency ]]
00:21:51.131   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.131   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.131   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.131   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.131   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_7
00:21:51.131   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_7[@]'
00:21:51.131   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.131   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_7
00:21:51.131   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_7[@]'
00:21:51.131   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.131   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.131    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 7 0xce
00:21:51.131   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.131   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.131   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.131   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.131   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.131   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.131   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.131   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.131   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.131   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.131   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.131   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.131   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.131   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
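For every CPU the trace shells out to rdmsr.pl to read MSR 0xCE, which is MSR_PLATFORM_INFO on Intel parts, and lands on cpufreq_non_turbo_ratio=23. Bits 15:8 of that register hold the maximum non-turbo ratio; a quick check that the raw value captured above decodes to the traced ratio and base frequency:

    # 0x70a2cf3811700 is the raw MSR 0xCE value from the trace.
    non_turbo_ratio=0x70a2cf3811700
    ratio=$(( (non_turbo_ratio >> 8) & 0xff ))   # bits 15:8 -> 0x17 = 23
    echo $((ratio * 100000))                     # 2300000 kHz = base_max_freq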
00:21:51.132   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.132   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=70
00:21:51.132   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq ]]
00:21:51.132   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.132   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.132   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq/base_frequency ]]
00:21:51.132   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.132   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.132   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.132   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.132   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_70
00:21:51.132   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_70[@]'
00:21:51.132   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.132   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_70
00:21:51.132   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_70[@]'
00:21:51.132   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.132   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.132    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 70 0xce
00:21:51.132   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.132   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.132   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.132   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.132   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.132   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.132   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.132   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.132   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.132   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.132   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.132   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.132   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.132   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.132   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=71
00:21:51.132   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu71/cpufreq ]]
00:21:51.132   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.132   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.132   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu71/cpufreq/base_frequency ]]
00:21:51.132   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.132   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=999998
00:21:51.132   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.132   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.132   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_71
00:21:51.132   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_71[@]'
00:21:51.132   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.132   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_71
00:21:51.133   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_71[@]'
00:21:51.133   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.133   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.133    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 71 0xce
00:21:51.133   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.133   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.133   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.133   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.133   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.133   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.133   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.133   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.133   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.133   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.133   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.133   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=8
00:21:51.133   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq ]]
00:21:51.133   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.133   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.133   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq/base_frequency ]]
00:21:51.133   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.133   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.133   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.133   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.133   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_8
00:21:51.133   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_8[@]'
00:21:51.133   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.133   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_8
00:21:51.133   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_8[@]'
00:21:51.133   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.133   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.133    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 8 0xce
00:21:51.133   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.133   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.133   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.133   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.133   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.133   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.133   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.133   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.133   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.133   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.133   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.133   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.133   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.133   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.134   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.134   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.134   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.134   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.134   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.134   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.134   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.134   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.134   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.134   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.134   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.134   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.134   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.134   00:52:40	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:51.134   00:52:40	-- scheduler/common.sh@261 -- # cpu_idx=9
00:21:51.134   00:52:40	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq ]]
00:21:51.134   00:52:40	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:51.134   00:52:40	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:51.134   00:52:40	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq/base_frequency ]]
00:21:51.134   00:52:40	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:51.134   00:52:40	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:51.134   00:52:40	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:51.134   00:52:40	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:51.134   00:52:40	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_9
00:21:51.134   00:52:40	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_9[@]'
00:21:51.134   00:52:40	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:51.134   00:52:40	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_9
00:21:51.134   00:52:40	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_9[@]'
00:21:51.134   00:52:40	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:51.134   00:52:40	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:51.134    00:52:40	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 9 0xce
00:21:51.394   00:52:40	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:51.394   00:52:40	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:51.394   00:52:40	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:51.394   00:52:40	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:51.394   00:52:40	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:51.394   00:52:40	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:51.394   00:52:40	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:51.394   00:52:40	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:51.394   00:52:40	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:51.394   00:52:40	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:51.394   00:52:40	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:51.394   00:52:40	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:51.394   00:52:40	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:51.394   00:52:40	-- scheduler/common.sh@359 -- # [[ -e /sys/devices/system/cpu/cpufreq/boost ]]
00:21:51.394   00:52:40	-- scheduler/common.sh@361 -- # [[ -e /sys/devices/system/cpu/intel_pstate/no_turbo ]]
00:21:51.394   00:52:40	-- scheduler/common.sh@362 -- # turbo_enabled=1
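With the per-CPU scan finished, turbo support is detected once, system-wide. A sketch of the traced fallback at lines 359-362, under the assumption that the intel_pstate knob is inverted relative to the generic boost knob (no_turbo=0 means turbo is on):

    # Prefer the generic cpufreq boost file; fall back to intel_pstate.
    if [[ -e /sys/devices/system/cpu/cpufreq/boost ]]; then
        turbo_enabled=$(< /sys/devices/system/cpu/cpufreq/boost)
    elif [[ -e /sys/devices/system/cpu/intel_pstate/no_turbo ]]; then
        turbo_enabled=$(( ! $(< /sys/devices/system/cpu/intel_pstate/no_turbo) ))
    fi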
00:21:51.394   00:52:40	-- scheduler/governor.sh@159 -- # initial_main_core_governor=powersave
00:21:51.394   00:52:40	-- scheduler/governor.sh@161 -- # verify_dpdk_governor
00:21:51.394   00:52:40	-- scheduler/governor.sh@60 -- # xtrace_disable
00:21:51.394   00:52:40	-- common/autotest_common.sh@10 -- # set +x
00:21:51.394  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:51.394  [2024-12-17 00:52:40.541818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:51.394  [2024-12-17 00:52:40.541898] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075009 ]
00:21:51.394  EAL: No free 2048 kB hugepages reported on node 1
00:21:51.394  [2024-12-17 00:52:40.655208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8
00:21:51.653  [2024-12-17 00:52:40.736506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:21:51.653  [2024-12-17 00:52:40.736775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:51.653  [2024-12-17 00:52:40.736818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:51.653  [2024-12-17 00:52:40.736918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:51.653  [2024-12-17 00:52:40.736938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37
00:21:51.653  [2024-12-17 00:52:40.737057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39
00:21:51.653  [2024-12-17 00:52:40.737021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38
00:21:51.653  [2024-12-17 00:52:40.737083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40
00:21:51.653  [2024-12-17 00:52:40.737088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:52.591  POWER: Env isn't set yet!
00:21:52.591  POWER: Attempting to initialise ACPI cpufreq power management...
00:21:52.591  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:21:52.591  POWER: Cannot set governor of lcore 1 to userspace
00:21:52.591  POWER: Attempting to initialise PSTAT power management...
00:21:52.591  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 1 power management
00:21:52.591  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 2 power management
00:21:52.591  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 3 power management
00:21:52.591  POWER: Power management governor of lcore 4 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 4 power management
00:21:52.591  POWER: Power management governor of lcore 37 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 37 power management
00:21:52.591  POWER: Power management governor of lcore 38 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 38 power management
00:21:52.591  POWER: Power management governor of lcore 39 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 39 power management
00:21:52.591  POWER: Power management governor of lcore 40 has been set to 'performance' successfully
00:21:52.591  POWER: Initialized successfully for lcore 40 power management
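DPDK's ACPI cpufreq backend fails first (the userspace governor cannot be set while intel_pstate owns the policy), so the PSTAT backend takes over and pins each lcore's governor to performance. In sysfs terms the successful path amounts to something like the sketch below; the lcore list is taken from the log, and the saved-original bookkeeping is an assumption about how the later restore is possible:

    # Switch each DPDK lcore to the performance governor, remembering
    # what was there before so it can be put back on shutdown.
    declare -A original_governor
    for lcore in 1 2 3 4 37 38 39 40; do
        gov=/sys/devices/system/cpu/cpu$lcore/cpufreq/scaling_governor
        original_governor[$lcore]=$(< "$gov")
        echo performance | sudo tee "$gov" > /dev/null
    done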
00:21:52.591  [2024-12-17 00:52:41.556131] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:21:52.591  [2024-12-17 00:52:41.556156] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:21:52.591  [2024-12-17 00:52:41.556172] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:21:52.851  [2024-12-17 00:52:41.966468] 'OCF_Core' volume operations registered
00:21:52.851  [2024-12-17 00:52:41.968815] 'OCF_Cache' volume operations registered
00:21:52.851  [2024-12-17 00:52:41.971626] 'OCF Composite' volume operations registered
00:21:52.851  [2024-12-17 00:52:41.974012] 'SPDK_block_device' volume operations registered
00:21:53.791  Waiting for samples...
00:21:54.359  MAIN DPDK cpu1 current frequency at 2199998 KHz (1000000-2300001 KHz), set frequency 2100000 KHz < 2200000 KHz
00:21:55.296  MAIN DPDK cpu1 current frequency at 2100007 KHz (1000000-2300001 KHz), set frequency 2000000 KHz < 2100000 KHz
00:21:56.673  MAIN DPDK cpu1 current frequency at 2000001 KHz (1000000-2300001 KHz), set frequency 2000000 KHz < 2000000 KHz
00:21:57.241  MAIN DPDK cpu1 current frequency at 1999999 KHz (1000000-2300001 KHz), set frequency 1800000 KHz < 2000000 KHz
00:21:58.618  MAIN DPDK cpu1 current frequency at 1800000 KHz (1000000-2300001 KHz), set frequency 1800000 KHz < 1800000 KHz
00:21:59.186  MAIN DPDK cpu1 current frequency at 1800000 KHz (1000000-2300001 KHz), set frequency 1600000 KHz < 1800000 KHz
00:22:00.564  MAIN DPDK cpu1 current frequency at 1600001 KHz (1000000-2300001 KHz), set frequency 1600000 KHz < 1600000 KHz
00:22:01.132  MAIN DPDK cpu1 current frequency at 1599996 KHz (1000000-2300001 KHz), set frequency 1400000 KHz < 1600000 KHz
00:22:02.508  MAIN DPDK cpu1 current frequency at 1399997 KHz (1000000-2300001 KHz), set frequency 1400000 KHz < 1400000 KHz
00:22:03.444  MAIN DPDK cpu1 current frequency at 1400003 KHz (1000000-2300001 KHz), set frequency 1200000 KHz < 1400000 KHz
00:22:04.380  MAIN DPDK cpu1 current frequency at 1200001 KHz (1000000-2300001 KHz), set frequency 1200000 KHz < 1200000 KHz
00:22:05.318  MAIN DPDK cpu1 current frequency at 1199997 KHz (1000000-2300001 KHz), set frequency 1000000 KHz < 1200000 KHz
00:22:05.318  Main cpu1 frequency dropped by 84%
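Each sample line reports the live frequency of the main core, its allowed range, and the next, strictly lower setpoint; sampling continues until cpu1 settles at the bottom of its range. A minimal parser for these lines (the file name samples.log is hypothetical; the field layout is taken from the lines above):

    # Extract current frequency and next setpoint from each sample line.
    while IFS= read -r line; do
        [[ $line == *'current frequency at'* ]] || continue
        cur=${line#*current frequency at }; cur=${cur%% KHz*}
        next=${line#*set frequency };       next=${next%% KHz*}
        printf 'cur=%s kHz -> next cap %s kHz\n' "$cur" "$next"
    done < samples.log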
00:22:05.318   00:52:54	-- scheduler/governor.sh@1 -- # killprocess 1075009
00:22:05.318   00:52:54	-- common/autotest_common.sh@936 -- # '[' -z 1075009 ']'
00:22:05.318   00:52:54	-- common/autotest_common.sh@940 -- # kill -0 1075009
00:22:05.318    00:52:54	-- common/autotest_common.sh@941 -- # uname
00:22:05.318   00:52:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:05.318    00:52:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1075009
00:22:05.318   00:52:54	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:05.318   00:52:54	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:05.318   00:52:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1075009'
00:22:05.318  killing process with pid 1075009
00:22:05.318   00:52:54	-- common/autotest_common.sh@955 -- # kill 1075009
00:22:05.318   00:52:54	-- common/autotest_common.sh@960 -- # wait 1075009
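killprocess above is autotest_common.sh's guarded teardown: it checks that the pid argument is non-empty, probes liveness with kill -0, reads the process name with ps and refuses to kill anything running as sudo, then kills and waits. The same pattern in isolation, using the pid from this run:
  pid=1075009                          # SPDK app pid from this run
  if [[ -n "$pid" ]] && kill -0 "$pid" 2>/dev/null; then
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && exit 1    # never kill the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true  # wait only succeeds for children of this shell
  fi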
00:22:05.577  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:22:05.577  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:22:05.577  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:22:05.577  POWER: Power management governor of lcore 4 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original
00:22:05.577  POWER: Power management governor of lcore 37 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original
00:22:05.577  POWER: Power management governor of lcore 38 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original
00:22:05.577  POWER: Power management governor of lcore 39 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original
00:22:05.577  POWER: Power management governor of lcore 40 has been set to 'powersave' successfully
00:22:05.577  POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original
00:22:06.148   00:52:55	-- scheduler/governor.sh@1 -- # restore_cpufreq
00:22:06.148   00:52:55	-- scheduler/governor.sh@15 -- # local cpu
00:22:06.148   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.148   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 1 1000000 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@367 -- # local cpu=1
00:22:06.148   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.148   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.148   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.148   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.148   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 1 powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@395 -- # local cpu=1
00:22:06.148   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
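Each trace block from here to the end of the section repeats the same two helpers for the next CPU in the list; only the cpu number changes. A condensed sketch of the pair for the intel_pstate path exercised above. The sysfs targets of the two echo calls are an assumption consistent with the cpufreq ABI (xtrace does not print redirections), and the real scheduler/common.sh helpers also branch on other cpufreq drivers:
  set_cpufreq() {                   # args: cpu min_freq max_freq (requires root)
      local cpufreq=/sys/devices/system/cpu/cpu$1/cpufreq
      (( $3 >= $2 )) || return 1                 # max_freq must not fall below min_freq
      echo "$3" > "$cpufreq/scaling_max_freq"    # the "echo 2300001" in the trace
      echo "$2" > "$cpufreq/scaling_min_freq"    # the "echo 1000000" in the trace
  }
  set_cpufreq_governor() {          # args: cpu governor
      local cpufreq=/sys/devices/system/cpu/cpu$1/cpufreq
      # the trace skips the write when the current governor already matches
      [[ $2 != "$(cat "$cpufreq/scaling_governor")" ]] && echo "$2" > "$cpufreq/scaling_governor"
  }
  set_cpufreq 1 1000000 2300001 && set_cpufreq_governor 1 powersave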
00:22:06.148   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.148   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 0 1000000 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@367 -- # local cpu=0
00:22:06.148   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.148   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.148   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.148   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.148   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 0 powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@395 -- # local cpu=0
00:22:06.148   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.148   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.148   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 2 1000000 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@367 -- # local cpu=2
00:22:06.148   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.148   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.148   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.148   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.148   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 2 powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@395 -- # local cpu=2
00:22:06.148   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.148   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.148   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 3 1000000 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@367 -- # local cpu=3
00:22:06.148   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.148   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.148   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.148   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.148   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 3 powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@395 -- # local cpu=3
00:22:06.148   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.148   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.148   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 4 1000000 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@367 -- # local cpu=4
00:22:06.148   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.148   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.148   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.148   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.148   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 4 powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@395 -- # local cpu=4
00:22:06.148   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.148   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.148   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 5 1000000 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@367 -- # local cpu=5
00:22:06.148   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.148   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu5/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.148   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.148   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.148   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 5 powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@395 -- # local cpu=5
00:22:06.148   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.148   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu5/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.148   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.148   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 6 1000000 2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@367 -- # local cpu=6
00:22:06.148   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.148   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.148   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq
00:22:06.148   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.148   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 6 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=6
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 7 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=7
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 7 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=7
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 8 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=8
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 8 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=8
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 9 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=9
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 9 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=9
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 10 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=10
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 10 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=10
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 11 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=11
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu11/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 11 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=11
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu11/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 12 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=12
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 12 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=12
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 13 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=13
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.149   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.149   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.149   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 13 powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@395 -- # local cpu=13
00:22:06.149   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.149   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.149   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.149   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 14 1000000 2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@367 -- # local cpu=14
00:22:06.149   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.149   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.149   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq
00:22:06.149   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.149   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 14 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=14
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 15 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=15
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 15 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=15
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 16 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=16
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 16 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=16
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 17 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=17
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 17 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=17
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 36 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=36
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 36 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=36
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=37
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=37
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=38
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=38
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=39
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.150   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.150   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.150   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 39 powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@395 -- # local cpu=39
00:22:06.150   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.150   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.150   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.150   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@367 -- # local cpu=40
00:22:06.150   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.150   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.150   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:22:06.150   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.150   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=40
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 41 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=41
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 41 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=41
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 42 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=42
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 42 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=42
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 43 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=43
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 43 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=43
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 44 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=44
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 44 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=44
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 45 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=45
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 45 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=45
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 46 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=46
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 46 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=46
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 47 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=47
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.151   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.151   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.151   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 47 powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@395 -- # local cpu=47
00:22:06.151   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.151   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.151   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.151   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 48 1000000 2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@367 -- # local cpu=48
00:22:06.151   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.151   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.151   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq
00:22:06.151   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.151   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.152   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.152   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.152   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 48 powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@395 -- # local cpu=48
00:22:06.152   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.152   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.152   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 49 1000000 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@367 -- # local cpu=49
00:22:06.152   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.152   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.152   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.152   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.152   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 49 powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@395 -- # local cpu=49
00:22:06.152   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.152   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.152   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 50 1000000 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@367 -- # local cpu=50
00:22:06.152   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.152   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.152   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.152   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.152   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 50 powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@395 -- # local cpu=50
00:22:06.152   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.152   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.152   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 51 1000000 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@367 -- # local cpu=51
00:22:06.152   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.152   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.152   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.152   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.152   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 51 powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@395 -- # local cpu=51
00:22:06.152   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.152   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.152   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 52 1000000 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@367 -- # local cpu=52
00:22:06.152   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.152   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.152   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.152   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.152   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 52 powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@395 -- # local cpu=52
00:22:06.152   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.152   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.152   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 53 1000000 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@367 -- # local cpu=53
00:22:06.152   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.152   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.152   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.152   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.152   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 53 powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@395 -- # local cpu=53
00:22:06.152   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.152   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.152   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 18 1000000 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@367 -- # local cpu=18
00:22:06.152   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.152   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu18/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.152   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.152   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.152   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.152   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 18 powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@395 -- # local cpu=18
00:22:06.152   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.152   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu18/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
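Two more details of the trace are worth noting: the case "${cpufreq_drivers[cpu]}" line dispatches on the cpufreq driver recorded for each CPU (every CPU on this node reports intel_pstate, so the same arm runs throughout), and governor.sh@17 shows the restore loop visiting the SPDK main core first and then the worker CPUs in the order of the cpus array, which is why the CPU numbers in the log (49-53, 18, 37-40, 23-35, 54-71) are not sorted. A plausible shape for that loop, with the saved-state array names invented for illustration:

    # Sketch; the real script populates these arrays while saving state,
    # and in this run every entry happens to be 1000000/2300001/powersave.
    for cpu in "$spdk_main_core" "${cpus[@]}"; do
        set_cpufreq "$cpu" "${cpufreq_min_freqs[cpu]}" "${cpufreq_max_freqs[cpu]}"
        set_cpufreq_governor "$cpu" "${cpufreq_governors[cpu]}"
    done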
00:22:06.152   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.152   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@367 -- # local cpu=37
00:22:06.152   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.152   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.152   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:22:06.152   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=37
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=38
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.153   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=38
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=39
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.153   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 39 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=39
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=40
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.153   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=40
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 23 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=23
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.153   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 23 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=23
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 24 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=24
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.153   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 24 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=24
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 25 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=25
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.153   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 25 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=25
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 26 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=26
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.153   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.153   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.153   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.153   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.153   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 26 powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@395 -- # local cpu=26
00:22:06.153   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.153   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq
00:22:06.153   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.153   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.153   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 27 1000000 2300001
00:22:06.153   00:52:55	-- scheduler/common.sh@367 -- # local cpu=27
00:22:06.153   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 27 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=27
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 28 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=28
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 28 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=28
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 29 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=29
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 29 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=29
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 30 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=30
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 30 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=30
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 31 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=31
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 31 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=31
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 32 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=32
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 32 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=32
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 33 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=33
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 33 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=33
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 34 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=34
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.154   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.154   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.154   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.154   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.154   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 34 powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@395 -- # local cpu=34
00:22:06.154   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.154   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq
00:22:06.154   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.154   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.154   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 35 1000000 2300001
00:22:06.154   00:52:55	-- scheduler/common.sh@367 -- # local cpu=35
00:22:06.154   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 35 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=35
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 54 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=54
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 54 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=54
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 55 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=55
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 55 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=55
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
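Since the same 1000000/2300001/powersave triple is written for every CPU, the whole restore can be spot-checked after the fact by reading the sysfs files back; the values below are the ones this log wrote:

    # Hypothetical verification step, not part of the test run.
    cpu=55
    grep . /sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_min_freq \
           /sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_max_freq \
           /sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
    # expected (modulo driver rounding): 1000000, 2300001, powersave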
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 56 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=56
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 56 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=56
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 57 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=57
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 57 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=57
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 58 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=58
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 58 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=58
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 59 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=59
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 59 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=59
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 60 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=60
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.155   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.155   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.155   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.155   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.155   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 60 powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@395 -- # local cpu=60
00:22:06.155   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.155   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq
00:22:06.155   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.155   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.155   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 61 1000000 2300001
00:22:06.155   00:52:55	-- scheduler/common.sh@367 -- # local cpu=61
00:22:06.155   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu61/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 61 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=61
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu61/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 62 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=62
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 62 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=62
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 63 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=63
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 63 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=63
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 64 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=64
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 64 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=64
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 65 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=65
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 65 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=65
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 66 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=66
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 66 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=66
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 67 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=67
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 67 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=67
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 68 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=68
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.156   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.156   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.156   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.156   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 68 powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@395 -- # local cpu=68
00:22:06.156   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.156   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq
00:22:06.156   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.156   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.156   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 69 1000000 2300001
00:22:06.156   00:52:55	-- scheduler/common.sh@367 -- # local cpu=69
00:22:06.156   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.156   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu69/cpufreq
00:22:06.157   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.157   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.157   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.157   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.157   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 69 powersave
00:22:06.157   00:52:55	-- scheduler/common.sh@395 -- # local cpu=69
00:22:06.157   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.157   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu69/cpufreq
00:22:06.157   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.157   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.157   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 70 1000000 2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@367 -- # local cpu=70
00:22:06.157   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.157   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq
00:22:06.157   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.157   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.157   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.157   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.157   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 70 powersave
00:22:06.157   00:52:55	-- scheduler/common.sh@395 -- # local cpu=70
00:22:06.157   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.157   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq
00:22:06.157   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:22:06.157   00:52:55	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:22:06.157   00:52:55	-- scheduler/governor.sh@18 -- # set_cpufreq 71 1000000 2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@367 -- # local cpu=71
00:22:06.157   00:52:55	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:22:06.157   00:52:55	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq
00:22:06.157   00:52:55	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:22:06.157   00:52:55	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:22:06.157   00:52:55	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:22:06.157   00:52:55	-- scheduler/common.sh@385 -- # echo 2300001
00:22:06.157   00:52:55	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:22:06.157   00:52:55	-- scheduler/common.sh@388 -- # echo 1000000
00:22:06.157   00:52:55	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 71 powersave
00:22:06.157   00:52:55	-- scheduler/common.sh@395 -- # local cpu=71
00:22:06.157   00:52:55	-- scheduler/common.sh@396 -- # local governor=powersave
00:22:06.157   00:52:55	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq
00:22:06.157   00:52:55	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
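The loop traced above walks the SPDK main core and every worker core, restoring each one's frequency window and governor through sysfs. A minimal sketch of that flow, assuming the intel_pstate driver and the 1000000/2300001 kHz bounds seen in the trace — xtrace hides redirections, so the scaling_min_freq/scaling_max_freq targets below are inferred, not verbatim:

    # Hedged sketch, not the verbatim scheduler/common.sh helpers.
    set_cpufreq() {
        local cpu=$1 min_freq=$2 max_freq=$3
        local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
        (( max_freq >= min_freq )) || return 1
        # Inferred targets: the traced "echo 2300001" / "echo 1000000"
        # presumably feed these two files.
        echo "$max_freq" > "$cpufreq/scaling_max_freq"
        echo "$min_freq" > "$cpufreq/scaling_min_freq"
    }
    set_cpufreq_governor() {
        local cpu=$1 governor=$2
        local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
        # The trace skips the write when the governor already matches:
        # [[ powersave != powersave ]] is false.
        [[ $(< "$cpufreq/scaling_governor") != "$governor" ]] &&
            echo "$governor" > "$cpufreq/scaling_governor"
    }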
00:22:06.157  
00:22:06.157  real	0m16.610s
00:22:06.157  user	0m27.309s
00:22:06.157  sys	0m5.108s
00:22:06.157   00:52:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:06.157   00:52:55	-- common/autotest_common.sh@10 -- # set +x
00:22:06.157  ************************************
00:22:06.157  END TEST dpdk_governor
00:22:06.157  ************************************
00:22:06.157   00:52:55	-- scheduler/scheduler.sh@17 -- # run_test interrupt_mode /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh
00:22:06.157   00:52:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:22:06.157   00:52:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:06.157   00:52:55	-- common/autotest_common.sh@10 -- # set +x
00:22:06.157  ************************************
00:22:06.157  START TEST interrupt_mode
00:22:06.157  ************************************
00:22:06.157   00:52:55	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh
00:22:06.157  * Looking for test storage...
00:22:06.157  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:22:06.157    00:52:55	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:22:06.157     00:52:55	-- common/autotest_common.sh@1690 -- # lcov --version
00:22:06.157     00:52:55	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:22:06.416    00:52:55	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:22:06.416    00:52:55	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:22:06.416    00:52:55	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:22:06.416    00:52:55	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:22:06.416    00:52:55	-- scripts/common.sh@335 -- # IFS=.-:
00:22:06.416    00:52:55	-- scripts/common.sh@335 -- # read -ra ver1
00:22:06.416    00:52:55	-- scripts/common.sh@336 -- # IFS=.-:
00:22:06.416    00:52:55	-- scripts/common.sh@336 -- # read -ra ver2
00:22:06.416    00:52:55	-- scripts/common.sh@337 -- # local 'op=<'
00:22:06.416    00:52:55	-- scripts/common.sh@339 -- # ver1_l=2
00:22:06.416    00:52:55	-- scripts/common.sh@340 -- # ver2_l=1
00:22:06.416    00:52:55	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:22:06.416    00:52:55	-- scripts/common.sh@343 -- # case "$op" in
00:22:06.416    00:52:55	-- scripts/common.sh@344 -- # : 1
00:22:06.416    00:52:55	-- scripts/common.sh@363 -- # (( v = 0 ))
00:22:06.416    00:52:55	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:06.416     00:52:55	-- scripts/common.sh@364 -- # decimal 1
00:22:06.416     00:52:55	-- scripts/common.sh@352 -- # local d=1
00:22:06.416     00:52:55	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:06.416     00:52:55	-- scripts/common.sh@354 -- # echo 1
00:22:06.416    00:52:55	-- scripts/common.sh@364 -- # ver1[v]=1
00:22:06.416     00:52:55	-- scripts/common.sh@365 -- # decimal 2
00:22:06.416     00:52:55	-- scripts/common.sh@352 -- # local d=2
00:22:06.416     00:52:55	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:06.416     00:52:55	-- scripts/common.sh@354 -- # echo 2
00:22:06.416    00:52:55	-- scripts/common.sh@365 -- # ver2[v]=2
00:22:06.416    00:52:55	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:22:06.416    00:52:55	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:22:06.416    00:52:55	-- scripts/common.sh@367 -- # return 0
00:22:06.416    00:52:55	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:06.416    00:52:55	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:22:06.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:06.416  		--rc genhtml_branch_coverage=1
00:22:06.416  		--rc genhtml_function_coverage=1
00:22:06.416  		--rc genhtml_legend=1
00:22:06.416  		--rc geninfo_all_blocks=1
00:22:06.416  		--rc geninfo_unexecuted_blocks=1
00:22:06.416  		
00:22:06.416  		'
00:22:06.416    00:52:55	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:22:06.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:06.416  		--rc genhtml_branch_coverage=1
00:22:06.416  		--rc genhtml_function_coverage=1
00:22:06.416  		--rc genhtml_legend=1
00:22:06.416  		--rc geninfo_all_blocks=1
00:22:06.416  		--rc geninfo_unexecuted_blocks=1
00:22:06.416  		
00:22:06.416  		'
00:22:06.416    00:52:55	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:22:06.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:06.416  		--rc genhtml_branch_coverage=1
00:22:06.416  		--rc genhtml_function_coverage=1
00:22:06.416  		--rc genhtml_legend=1
00:22:06.416  		--rc geninfo_all_blocks=1
00:22:06.416  		--rc geninfo_unexecuted_blocks=1
00:22:06.416  		
00:22:06.416  		'
00:22:06.416    00:52:55	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:22:06.416  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:06.416  		--rc genhtml_branch_coverage=1
00:22:06.416  		--rc genhtml_function_coverage=1
00:22:06.416  		--rc genhtml_legend=1
00:22:06.416  		--rc geninfo_all_blocks=1
00:22:06.416  		--rc geninfo_unexecuted_blocks=1
00:22:06.416  		
00:22:06.416  		'
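The cascade of scripts/common.sh lines above is a pure-bash version compare: `lt 1.15 2` gates the extra lcov coverage flags on the installed lcov being older than 2.x. A compact sketch of the same component-wise comparison, simplified to the '<' case (the traced cmp_versions also validates each component through its decimal helper):

    # Hedged sketch of the '<' comparison only.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # ver1 newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # ver1 older
        done
        return 1   # equal is not "less than"
    }
    # lt 1.15 2 -> returns 0 here, so the --rc lcov_* options above get exported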
00:22:06.416   00:52:55	-- scheduler/interrupt.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh
00:22:06.416    00:52:55	-- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:22:06.416    00:52:55	-- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:22:06.416    00:52:55	-- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:22:06.416    00:52:55	-- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler
00:22:06.416    00:52:55	-- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:22:06.416    00:52:55	-- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh
00:22:06.417     00:52:55	-- scheduler/cgroups.sh@245 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:22:06.417      00:52:55	-- scheduler/cgroups.sh@246 -- # check_cgroup
00:22:06.417      00:52:55	-- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:22:06.417      00:52:55	-- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:22:06.417      00:52:55	-- scheduler/cgroups.sh@10 -- # echo 2
00:22:06.417     00:52:55	-- scheduler/cgroups.sh@246 -- # cgroup_version=2
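check_cgroup, traced above, decides the cgroup layout with two tests: the unified hierarchy exists and its controller list includes cpuset, so this run reports cgroup v2. Roughly:

    # Hedged sketch mirroring the two traced tests.
    check_cgroup() {
        local sysfs_cgroup=/sys/fs/cgroup
        if [[ -e $sysfs_cgroup/cgroup.controllers ]] &&
           [[ $(< "$sysfs_cgroup/cgroup.controllers") == *cpuset* ]]; then
            echo 2   # cgroup v2 with a usable cpuset controller
        fi
    }
    cgroup_version=$(check_cgroup)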
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@12 -- # trap 'killprocess "$spdk_pid"' EXIT
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@14 -- # cpus=()
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@14 -- # declare -a cpus
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@15 -- # cpus_to_collect=()
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@15 -- # declare -a cpus_to_collect
00:22:06.417    00:52:55	-- scheduler/interrupt.sh@17 -- # parse_cpu_list /dev/fd/62
00:22:06.417     00:52:55	-- scheduler/interrupt.sh@17 -- # echo 1,2,3,4,37,38,39,40
00:22:06.417    00:52:55	-- scheduler/common.sh@34 -- # local list=/dev/fd/62
00:22:06.417    00:52:55	-- scheduler/common.sh@35 -- # local elem elems cpus
00:22:06.417    00:52:55	-- scheduler/common.sh@38 -- # IFS=,
00:22:06.417    00:52:55	-- scheduler/common.sh@38 -- # read -ra elems
00:22:06.417    00:52:55	-- scheduler/common.sh@40 -- # (( 8 > 0 ))
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 1 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=1
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 2 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=2
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 3 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=3
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 4 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=4
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 37 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=37
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 38 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=38
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 39 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=39
00:22:06.417    00:52:55	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:22:06.417    00:52:55	-- scheduler/common.sh@43 -- # [[ 40 == *-* ]]
00:22:06.417    00:52:55	-- scheduler/common.sh@49 -- # cpus[elem]=40
00:22:06.417    00:52:55	-- scheduler/common.sh@52 -- # printf '%u\n' 1 2 3 4 37 38 39 40
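parse_cpu_list, traced above, splits the comma-separated mask `1,2,3,4,37,38,39,40` and folds each element into a sparse array keyed by CPU id; the `[[ ... == *-* ]]` tests are where a range such as `1-4` would be expanded. A simplified sketch:

    # Hedged sketch; the traced helper reads the list from a file descriptor.
    parse_cpu_list() {
        local -a elems cpus=()
        local elem
        IFS=, read -ra elems < "$1"
        for elem in "${elems[@]}"; do
            if [[ $elem == *-* ]]; then           # expand "a-b" ranges
                local cpu
                for ((cpu = ${elem%-*}; cpu <= ${elem#*-}; cpu++)); do
                    cpus[cpu]=$cpu
                done
            else
                cpus[elem]=$elem                   # sparse: index == value
            fi
        done
        printf '%u\n' "${cpus[@]}"
    }
    # parse_cpu_list <(echo 1,2,3,4,37,38,39,40) -> one CPU id per line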
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@17 -- # fold_list_onto_array cpus 1 2 3 4 37 38 39 40
00:22:06.417   00:52:55	-- scheduler/common.sh@16 -- # local array=cpus
00:22:06.417   00:52:55	-- scheduler/common.sh@17 -- # local elem
00:22:06.417   00:52:55	-- scheduler/common.sh@19 -- # shift
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=1'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=1
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=2'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=2
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=3'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=3
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=4'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=4
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=37'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=37
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=38'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=38
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=39'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=39
00:22:06.417   00:52:55	-- scheduler/common.sh@21 -- # for elem in "$@"
00:22:06.417   00:52:55	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=40'
00:22:06.417    00:52:55	-- scheduler/common.sh@22 -- # cpus[elem]=40
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@19 -- # cpus=("${cpus[@]}")
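fold_list_onto_array, traced above, does the complementary step: each id becomes its own index in the cpus array, which deduplicates repeats and keeps iteration order sorted by index. The traced helper builds the assignment with eval; a nameref is the equivalent modern spelling:

    # Hedged sketch using a nameref instead of the traced eval.
    fold_list_onto_array() {
        local -n _array=$1
        shift
        local elem
        for elem in "$@"; do
            _array[elem]=$elem
        done
    }
    # fold_list_onto_array cpus 1 2 3 4 37 38 39 40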
00:22:06.417   00:52:55	-- scheduler/interrupt.sh@78 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1
00:22:06.417   00:52:55	-- scheduler/common.sh@405 -- # [[ -e /proc//status ]]
00:22:06.417   00:52:55	-- scheduler/common.sh@409 -- # spdk_pid=1078916
00:22:06.417   00:52:55	-- scheduler/common.sh@408 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc
00:22:06.417   00:52:55	-- scheduler/common.sh@411 -- # waitforlisten 1078916
00:22:06.417   00:52:55	-- common/autotest_common.sh@829 -- # '[' -z 1078916 ']'
00:22:06.417   00:52:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:06.417   00:52:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:22:06.417   00:52:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:06.417  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:06.417   00:52:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:22:06.417   00:52:55	-- common/autotest_common.sh@10 -- # set +x
00:22:06.417  [2024-12-17 00:52:55.518956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:06.417  [2024-12-17 00:52:55.519028] [ DPDK EAL parameters: scheduler --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078916 ]
00:22:06.417  EAL: No free 2048 kB hugepages reported on node 1
00:22:06.417  [2024-12-17 00:52:55.634558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8
00:22:06.676  [2024-12-17 00:52:55.716840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:06.676  [2024-12-17 00:52:55.716924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:22:06.676  [2024-12-17 00:52:55.717027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:22:06.676  [2024-12-17 00:52:55.718904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37
00:22:06.676  [2024-12-17 00:52:55.718939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38
00:22:06.676  [2024-12-17 00:52:55.722927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39
00:22:06.676  [2024-12-17 00:52:55.722956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40
00:22:06.676  [2024-12-17 00:52:55.722961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:06.676   00:52:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:06.676   00:52:55	-- common/autotest_common.sh@862 -- # return 0
00:22:06.676   00:52:55	-- scheduler/common.sh@412 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic
00:22:07.245  POWER: Env isn't set yet!
00:22:07.245  POWER: Attempting to initialise ACPI cpufreq power management...
00:22:07.245  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:22:07.245  POWER: Cannot set governor of lcore 1 to userspace
00:22:07.245  POWER: Attempting to initialise PSTAT power management...
00:22:07.245  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:22:07.245  POWER: Initialized successfully for lcore 1 power management
00:22:07.504  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:22:07.504  POWER: Initialized successfully for lcore 2 power management
00:22:07.504  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:22:07.504  POWER: Initialized successfully for lcore 3 power management
00:22:07.504  POWER: Power management governor of lcore 4 has been set to 'performance' successfully
00:22:07.504  POWER: Initialized successfully for lcore 4 power management
00:22:07.504  POWER: Power management governor of lcore 37 has been set to 'performance' successfully
00:22:07.504  POWER: Initialized successfully for lcore 37 power management
00:22:07.504  POWER: Power management governor of lcore 38 has been set to 'performance' successfully
00:22:07.504  POWER: Initialized successfully for lcore 38 power management
00:22:07.504  POWER: Power management governor of lcore 39 has been set to 'performance' successfully
00:22:07.504  POWER: Initialized successfully for lcore 39 power management
00:22:07.504  POWER: Power management governor of lcore 40 has been set to 'performance' successfully
00:22:07.504  POWER: Initialized successfully for lcore 40 power management
00:22:07.504  [2024-12-17 00:52:56.576131] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:22:07.504  [2024-12-17 00:52:56.576156] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:22:07.504  [2024-12-17 00:52:56.576172] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:22:07.504   00:52:56	-- scheduler/common.sh@413 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:22:07.764  [2024-12-17 00:52:56.987400] 'OCF_Core' volume operations registered
00:22:07.764  [2024-12-17 00:52:56.989813] 'OCF_Cache' volume operations registered
00:22:07.764  [2024-12-17 00:52:56.992672] 'OCF Composite' volume operations registered
00:22:07.764  [2024-12-17 00:52:56.995098] 'SPDK_block_device' volume operations registered
00:22:07.764  [2024-12-17 00:52:56.996134] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:22:07.764   00:52:57	-- scheduler/interrupt.sh@80 -- # interrupt
00:22:07.764   00:52:57	-- scheduler/interrupt.sh@22 -- # local busy_cpus
00:22:07.764   00:52:57	-- scheduler/interrupt.sh@23 -- # local cpu thread
00:22:07.764   00:52:57	-- scheduler/interrupt.sh@25 -- # local reactor_framework
00:22:07.764   00:52:57	-- scheduler/interrupt.sh@27 -- # cpus_to_collect=("${cpus[@]}")
00:22:07.764   00:52:57	-- scheduler/interrupt.sh@28 -- # collect_cpu_idle
00:22:07.764   00:52:57	-- scheduler/common.sh@626 -- # (( 8 > 0 ))
00:22:07.764   00:52:57	-- scheduler/common.sh@628 -- # local time=5
00:22:07.764   00:52:57	-- scheduler/common.sh@629 -- # local cpu
00:22:07.764   00:52:57	-- scheduler/common.sh@630 -- # local samples
00:22:07.764   00:52:57	-- scheduler/common.sh@631 -- # is_idle=()
00:22:07.764   00:52:57	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:07.764   00:52:57	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' '1 2 3 4 37 38 39 40' 5
00:22:07.764  Collecting cpu idle stats (cpus: 1 2 3 4 37 38 39 40) for 5 seconds...
00:22:07.764   00:52:57	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 1 2 3 4 37 38 39 40
00:22:07.764   00:52:57	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:07.764   00:52:57	-- common/autotest_common.sh@10 -- # set +x
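collect_cpu_idle now samples the monitored cores once per second for the five-second window announced above. A hypothetical reader for the raw material, assuming the usual /proc/stat layout — this is not the SPDK get_cpu_time, which also splits the fields into the raw_samples_<cpu> arrays consumed below:

    # Hypothetical sampler; /proc/stat rows: cpuN user nice system idle iowait ...
    sample_cpu_times() {
        local seconds=$1; shift
        local pattern i
        pattern=$(IFS='|'; echo "$*")          # "1|2|3|4|37|38|39|40"
        for ((i = 0; i < seconds; i++)); do
            grep -E "^cpu($pattern) " /proc/stat
            sleep 1
        done
    }
    # sample_cpu_times 5 1 2 3 4 37 38 39 40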
00:22:14.335   00:53:03	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:14.335   00:53:03	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:14.335   00:53:03	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:14.335    00:53:03	-- scheduler/common.sh@641 -- # calc_median 0 0 0 0 0
00:22:14.335    00:53:03	-- scheduler/common.sh@727 -- # samples=('0' '0' '0' '0' '0')
00:22:14.335    00:53:03	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:14.335    00:53:03	-- scheduler/common.sh@728 -- # local middle median sample
00:22:14.335    00:53:03	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:14.335     00:53:03	-- scheduler/common.sh@730 -- # printf '%s\n' 0 0 0 0 0
00:22:14.335     00:53:03	-- scheduler/common.sh@730 -- # sort -n
00:22:14.335    00:53:03	-- scheduler/common.sh@732 -- # middle=2
00:22:14.335    00:53:03	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:14.335    00:53:03	-- scheduler/common.sh@736 -- # median=0
00:22:14.335    00:53:03	-- scheduler/common.sh@739 -- # echo 0
00:22:14.335   00:53:03	-- scheduler/common.sh@641 -- # load_median=0
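calc_median, traced above, is a numeric sort plus a middle pick: with five samples, middle = 5 / 2 = 2 and the even-count averaging branch is skipped. As a standalone sketch:

    # Hedged sketch of the traced median.
    calc_median() {
        local -a sorted
        mapfile -t sorted < <(printf '%s\n' "$@" | sort -n)
        local middle=$(( ${#sorted[@]} / 2 ))
        if (( ${#sorted[@]} % 2 == 0 )); then
            echo $(( (sorted[middle - 1] + sorted[middle]) / 2 ))
        else
            echo "${sorted[middle]}"
        fi
    }
    # calc_median 0 0 0 0 0        -> 0
    # calc_median 100 100 97 58 55 -> 97 (the cpu2 case later in this run)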
00:22:14.335   00:53:03	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 1 '0 0 0 0 0' 0 0
00:22:14.335  * cpu1 idle samples: 0 0 0 0 0 (avg: 0%, median: 0%)
00:22:14.335    00:53:03	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 1 user
00:22:14.335    00:53:03	-- scheduler/common.sh@678 -- # local cpu=1 time=user
00:22:14.335    00:53:03	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:14.335    00:53:03	-- scheduler/common.sh@682 -- # [[ -v raw_samples_1 ]]
00:22:14.335    00:53:03	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_1
00:22:14.335    00:53:03	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:14.335    00:53:03	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:14.335    00:53:03	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:14.335    00:53:03	-- scheduler/common.sh@690 -- # case "$time" in
00:22:14.335    00:53:03	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:14.335     00:53:03	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:14.335    00:53:03	-- scheduler/common.sh@697 -- # usage=101
00:22:14.335    00:53:03	-- scheduler/common.sh@698 -- # usage=100
00:22:14.335    00:53:03	-- scheduler/common.sh@700 -- # printf %u 100
00:22:14.335    00:53:03	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 1 user 100
00:22:14.335  * cpu1 user usage: 100
00:22:14.335    00:53:03	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 1 '133036 133137 133238 133339 133440'
00:22:14.335  * cpu1 user samples: 133036 133137 133238 133339 133440
00:22:14.335    00:53:03	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 1 '1671 1671 1671 1671 1671'
00:22:14.335  * cpu1 nice samples: 1671 1671 1671 1671 1671
00:22:14.335    00:53:03	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 1 '6804 6804 6805 6805 6805'
00:22:14.335  * cpu1 system samples: 6804 6804 6805 6805 6805
00:22:14.335   00:53:03	-- scheduler/common.sh@652 -- # user_load=100
00:22:14.335   00:53:03	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:14.335   00:53:03	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:22:14.335   00:53:03	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 1
00:22:14.335  * cpu1 is not idle
00:22:14.335   00:53:03	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
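The usage math above is a tick delta clamped to the clock rate: cpu1's user counter moved 133137 - 133036 = 101 ticks in one second, and with getconf CLK_TCK reporting 100 the value is capped, giving the "user usage: 100" line; a core is only declared idle once user_load drops to 15 or below. A sketch of that clamp (hypothetical helper name, not the SPDK code verbatim):

    # Hypothetical helper mirroring the traced clamp.
    user_usage() {
        local prev=$1 curr=$2
        local clk_tck usage
        clk_tck=$(getconf CLK_TCK)     # 100 on this machine, per the trace
        usage=$(( curr - prev ))
        (( usage > clk_tck )) && usage=$clk_tck
        printf '%u\n' "$usage"
    }
    # user_usage 133036 133137 -> 100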
00:22:14.335    00:53:03	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 1
00:22:14.335    00:53:03	-- scheduler/common.sh@747 -- # xtrace_disable
00:22:14.335    00:53:03	-- common/autotest_common.sh@10 -- # set +x
00:22:18.525  stime samples: 0 0 0 0
00:22:18.525  utime samples: 0 99 100 100
00:22:18.525   00:53:07	-- scheduler/common.sh@666 -- # user_spdk_load=99
00:22:18.525   00:53:07	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
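The stime/utime samples above come from the SPDK process itself rather than /proc/stat: get_spdk_proc_time watches the ticks charged to the thread pinned on the core, which is how cpu1 is confirmed busy (user_spdk_load=99) while the thread pinned to cpu2 is later found idle regardless of the core's raw load. A hypothetical per-pid reader for the same counters — field 2 of /proc/<pid>/stat is the parenthesised command name, so this naive split assumes it contains no spaces:

    # Hypothetical reader, not the SPDK helper.
    proc_times() {
        local -a stat
        read -ra stat < "/proc/$1/stat"
        # 1-indexed fields 14/15 are utime/stime in clock ticks
        echo "utime=${stat[13]} stime=${stat[14]}"
    }
    # diff two one-second-apart calls to get per-interval tick deltas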
00:22:18.525   00:53:07	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:18.525   00:53:07	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:18.525    00:53:07	-- scheduler/common.sh@641 -- # calc_median 100 100 97 58 55
00:22:18.525    00:53:07	-- scheduler/common.sh@727 -- # samples=('100' '100' '97' '58' '55')
00:22:18.525    00:53:07	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:18.525    00:53:07	-- scheduler/common.sh@728 -- # local middle median sample
00:22:18.525    00:53:07	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:18.525     00:53:07	-- scheduler/common.sh@730 -- # printf '%s\n' 100 100 97 58 55
00:22:18.525     00:53:07	-- scheduler/common.sh@730 -- # sort -n
00:22:18.525    00:53:07	-- scheduler/common.sh@732 -- # middle=2
00:22:18.526    00:53:07	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:18.526    00:53:07	-- scheduler/common.sh@736 -- # median=97
00:22:18.526    00:53:07	-- scheduler/common.sh@739 -- # echo 97
00:22:18.526   00:53:07	-- scheduler/common.sh@641 -- # load_median=97
00:22:18.526   00:53:07	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '100 100 97 58 55' 82 97
00:22:18.526  * cpu2 idle samples: 100 100 97 58 55 (avg: 82%, median: 97%)
00:22:18.526    00:53:07	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 2 user
00:22:18.526    00:53:07	-- scheduler/common.sh@678 -- # local cpu=2 time=user
00:22:18.526    00:53:07	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:18.526    00:53:07	-- scheduler/common.sh@682 -- # [[ -v raw_samples_2 ]]
00:22:18.526    00:53:07	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_2
00:22:18.526    00:53:07	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:18.526    00:53:07	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:18.526    00:53:07	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:18.526    00:53:07	-- scheduler/common.sh@690 -- # case "$time" in
00:22:18.526    00:53:07	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:18.526     00:53:07	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:18.526    00:53:07	-- scheduler/common.sh@697 -- # usage=44
00:22:18.526    00:53:07	-- scheduler/common.sh@698 -- # usage=44
00:22:18.526    00:53:07	-- scheduler/common.sh@700 -- # printf %u 44
00:22:18.526    00:53:07	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 2 user 44
00:22:18.526  * cpu2 user usage: 44
00:22:18.526    00:53:07	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 2 '71288 71288 71289 71330 71374'
00:22:18.526  * cpu2 user samples: 71288 71288 71289 71330 71374
00:22:18.526    00:53:07	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0'
00:22:18.526  * cpu2 nice samples: 0 0 0 0 0
00:22:18.526    00:53:07	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 2 '5365 5365 5367 5368 5369'
00:22:18.526  * cpu2 system samples: 5365 5365 5367 5368 5369
00:22:18.526   00:53:07	-- scheduler/common.sh@652 -- # user_load=44
00:22:18.526   00:53:07	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:18.526   00:53:07	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:22:18.526   00:53:07	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 2
00:22:18.526  * cpu2 is not idle
00:22:18.526   00:53:07	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
00:22:18.526    00:53:07	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 2
00:22:18.526    00:53:07	-- scheduler/common.sh@747 -- # xtrace_disable
00:22:18.526    00:53:07	-- common/autotest_common.sh@10 -- # set +x
00:22:22.719  stime samples: 0 0 0 0
00:22:22.719  utime samples: 0 0 0 0
00:22:22.719   00:53:11	-- scheduler/common.sh@666 -- # user_spdk_load=0
00:22:22.719   00:53:11	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
00:22:22.719   00:53:11	-- scheduler/common.sh@668 -- # printf '* SPDK thread pinned to cpu%u seems to be idle regardless (%u%%)\n' 2 0
00:22:22.719  * SPDK thread pinned to cpu2 seems to be idle regardless (0%)
00:22:22.719   00:53:11	-- scheduler/common.sh@671 -- # is_idle[cpu]=1
00:22:22.719   00:53:11	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:22.719   00:53:11	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:22.719    00:53:11	-- scheduler/common.sh@641 -- # calc_median 100 100 100 99 100
00:22:22.719    00:53:11	-- scheduler/common.sh@727 -- # samples=('100' '100' '100' '99' '100')
00:22:22.719    00:53:11	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:22.719    00:53:11	-- scheduler/common.sh@728 -- # local middle median sample
00:22:22.719    00:53:11	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:22.719     00:53:11	-- scheduler/common.sh@730 -- # printf '%s\n' 100 100 100 99 100
00:22:22.719     00:53:11	-- scheduler/common.sh@730 -- # sort -n
00:22:22.719    00:53:11	-- scheduler/common.sh@732 -- # middle=2
00:22:22.719    00:53:11	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:22.719    00:53:11	-- scheduler/common.sh@736 -- # median=100
00:22:22.719    00:53:11	-- scheduler/common.sh@739 -- # echo 100
00:22:22.719   00:53:11	-- scheduler/common.sh@641 -- # load_median=100
00:22:22.719   00:53:11	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '100 100 100 99 100' 99 100
00:22:22.719  * cpu3 idle samples: 100 100 100 99 100 (avg: 99%, median: 100%)
00:22:22.719    00:53:11	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 3 user
00:22:22.719    00:53:11	-- scheduler/common.sh@678 -- # local cpu=3 time=user
00:22:22.719    00:53:11	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:22.719    00:53:11	-- scheduler/common.sh@682 -- # [[ -v raw_samples_3 ]]
00:22:22.719    00:53:11	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_3
00:22:22.719    00:53:11	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:22.719    00:53:11	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:22.719    00:53:11	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:22.719    00:53:11	-- scheduler/common.sh@690 -- # case "$time" in
00:22:22.719    00:53:11	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:22.719     00:53:11	-- scheduler/common.sh@691 -- # trap - ERR
00:22:22.719     00:53:11	-- scheduler/common.sh@691 -- # print_backtrace
00:22:22.719     00:53:11	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:22:22.719     00:53:11	-- common/autotest_common.sh@1142 -- # return 0
00:22:22.719     00:53:11	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:22.719    00:53:11	-- scheduler/common.sh@697 -- # usage=0
00:22:22.719    00:53:11	-- scheduler/common.sh@698 -- # usage=0
00:22:22.719    00:53:11	-- scheduler/common.sh@700 -- # printf %u 0
00:22:22.719    00:53:11	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 3 user 0
00:22:22.719  * cpu3 user usage: 0
00:22:22.719    00:53:11	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 3 '62080 62080 62080 62080 62080'
00:22:22.719  * cpu3 user samples: 62080 62080 62080 62080 62080
00:22:22.719    00:53:11	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 3 '6 6 6 6 6'
00:22:22.719  * cpu3 nice samples: 6 6 6 6 6
00:22:22.719    00:53:11	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 3 '4956 4956 4956 4957 4957'
00:22:22.719  * cpu3 system samples: 4956 4956 4956 4957 4957
00:22:22.719   00:53:11	-- scheduler/common.sh@652 -- # user_load=0
00:22:22.719   00:53:11	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:22.719   00:53:11	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 3
00:22:22.719  * cpu3 is idle
00:22:22.719   00:53:11	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:22.719   00:53:11	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:22.719   00:53:11	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:22.719    00:53:11	-- scheduler/common.sh@641 -- # calc_median 99 100 100 100 100
00:22:22.719    00:53:11	-- scheduler/common.sh@727 -- # samples=('99' '100' '100' '100' '100')
00:22:22.719    00:53:11	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:22.719    00:53:11	-- scheduler/common.sh@728 -- # local middle median sample
00:22:22.719    00:53:11	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:22.719     00:53:11	-- scheduler/common.sh@730 -- # printf '%s\n' 99 100 100 100 100
00:22:22.719     00:53:11	-- scheduler/common.sh@730 -- # sort -n
00:22:22.719    00:53:11	-- scheduler/common.sh@732 -- # middle=2
00:22:22.719    00:53:11	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:22.719    00:53:11	-- scheduler/common.sh@736 -- # median=100
00:22:22.719    00:53:11	-- scheduler/common.sh@739 -- # echo 100
00:22:22.719   00:53:11	-- scheduler/common.sh@641 -- # load_median=100
00:22:22.719   00:53:11	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '99 100 100 100 100' 99 100
00:22:22.719  * cpu4 idle samples: 99 100 100 100 100 (avg: 99%, median: 100%)
00:22:22.719    00:53:11	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 4 user
00:22:22.719    00:53:11	-- scheduler/common.sh@678 -- # local cpu=4 time=user
00:22:22.719    00:53:11	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:22.719    00:53:11	-- scheduler/common.sh@682 -- # [[ -v raw_samples_4 ]]
00:22:22.719    00:53:11	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_4
00:22:22.719    00:53:11	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:22.719    00:53:11	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:22.719    00:53:11	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:22.719    00:53:11	-- scheduler/common.sh@690 -- # case "$time" in
00:22:22.719    00:53:11	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:22.719     00:53:11	-- scheduler/common.sh@691 -- # trap - ERR
00:22:22.719     00:53:11	-- scheduler/common.sh@691 -- # print_backtrace
00:22:22.719     00:53:11	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:22:22.719     00:53:11	-- common/autotest_common.sh@1142 -- # return 0
00:22:22.719     00:53:11	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:22.719    00:53:11	-- scheduler/common.sh@697 -- # usage=0
00:22:22.719    00:53:11	-- scheduler/common.sh@698 -- # usage=0
00:22:22.719    00:53:11	-- scheduler/common.sh@700 -- # printf %u 0
00:22:22.719    00:53:11	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 4 user 0
00:22:22.719  * cpu4 user usage: 0
00:22:22.719    00:53:11	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 4 '25620 25620 25620 25620 25620'
00:22:22.719  * cpu4 user samples: 25620 25620 25620 25620 25620
00:22:22.719    00:53:11	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0'
00:22:22.719  * cpu4 nice samples: 0 0 0 0 0
00:22:22.719    00:53:11	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 4 '4793 4793 4793 4793 4793'
00:22:22.719  * cpu4 system samples: 4793 4793 4793 4793 4793
00:22:22.719   00:53:11	-- scheduler/common.sh@652 -- # user_load=0
00:22:22.719   00:53:11	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:22.719   00:53:11	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 4
00:22:22.719  * cpu4 is idle
00:22:22.719   00:53:11	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:22.719   00:53:11	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:22.719   00:53:11	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:22.719    00:53:11	-- scheduler/common.sh@641 -- # calc_median 100 99 99 98 100
00:22:22.719    00:53:11	-- scheduler/common.sh@727 -- # samples=('100' '99' '99' '98' '100')
00:22:22.719    00:53:11	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:22.719    00:53:11	-- scheduler/common.sh@728 -- # local middle median sample
00:22:22.719    00:53:11	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:22.719     00:53:11	-- scheduler/common.sh@730 -- # printf '%s\n' 100 99 99 98 100
00:22:22.719     00:53:11	-- scheduler/common.sh@730 -- # sort -n
00:22:22.719    00:53:11	-- scheduler/common.sh@732 -- # middle=2
00:22:22.719    00:53:11	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:22.719    00:53:11	-- scheduler/common.sh@736 -- # median=99
00:22:22.719    00:53:11	-- scheduler/common.sh@739 -- # echo 99
00:22:22.720   00:53:11	-- scheduler/common.sh@641 -- # load_median=99
00:22:22.720   00:53:11	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 37 '100 99 99 98 100' 99 99
00:22:22.720  * cpu37 idle samples: 100 99 99 98 100 (avg: 99%, median: 99%)
00:22:22.720    00:53:11	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 37 user
00:22:22.720    00:53:11	-- scheduler/common.sh@678 -- # local cpu=37 time=user
00:22:22.720    00:53:11	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:22.720    00:53:11	-- scheduler/common.sh@682 -- # [[ -v raw_samples_37 ]]
00:22:22.720    00:53:11	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_37
00:22:22.720    00:53:11	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@690 -- # case "$time" in
00:22:22.720    00:53:11	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:22.720     00:53:11	-- scheduler/common.sh@691 -- # trap - ERR
00:22:22.720     00:53:11	-- scheduler/common.sh@691 -- # print_backtrace
00:22:22.720     00:53:11	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:22:22.720     00:53:11	-- common/autotest_common.sh@1142 -- # return 0
00:22:22.720     00:53:11	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:22.720    00:53:11	-- scheduler/common.sh@697 -- # usage=0
00:22:22.720    00:53:11	-- scheduler/common.sh@698 -- # usage=0
00:22:22.720    00:53:11	-- scheduler/common.sh@700 -- # printf %u 0
00:22:22.720    00:53:11	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 37 user 0
00:22:22.720  * cpu37 user usage: 0
00:22:22.720    00:53:11	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 37 '12724 12725 12726 12727 12727'
00:22:22.720  * cpu37 user samples: 12724 12725 12726 12727 12727
00:22:22.720    00:53:11	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 37 '34 34 34 34 34'
00:22:22.720  * cpu37 nice samples: 34 34 34 34 34
00:22:22.720    00:53:11	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 37 '2073 2073 2073 2074 2074'
00:22:22.720  * cpu37 system samples: 2073 2073 2073 2074 2074
00:22:22.720   00:53:11	-- scheduler/common.sh@652 -- # user_load=0
00:22:22.720   00:53:11	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:22.720   00:53:11	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 37
00:22:22.720  * cpu37 is idle
00:22:22.720   00:53:11	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:22.720   00:53:11	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:22.720   00:53:11	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:22.720    00:53:11	-- scheduler/common.sh@641 -- # calc_median 100 100 100 100 100
00:22:22.720    00:53:11	-- scheduler/common.sh@727 -- # samples=('100' '100' '100' '100' '100')
00:22:22.720    00:53:11	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:22.720    00:53:11	-- scheduler/common.sh@728 -- # local middle median sample
00:22:22.720    00:53:11	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:22.720     00:53:11	-- scheduler/common.sh@730 -- # printf '%s\n' 100 100 100 100 100
00:22:22.720     00:53:11	-- scheduler/common.sh@730 -- # sort -n
00:22:22.720    00:53:11	-- scheduler/common.sh@732 -- # middle=2
00:22:22.720    00:53:11	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:22.720    00:53:11	-- scheduler/common.sh@736 -- # median=100
00:22:22.720    00:53:11	-- scheduler/common.sh@739 -- # echo 100
00:22:22.720   00:53:11	-- scheduler/common.sh@641 -- # load_median=100
00:22:22.720   00:53:11	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 38 '100 100 100 100 100' 100 100
00:22:22.720  * cpu38 idle samples: 100 100 100 100 100 (avg: 100%, median: 100%)
00:22:22.720    00:53:11	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 38 user
00:22:22.720    00:53:11	-- scheduler/common.sh@678 -- # local cpu=38 time=user
00:22:22.720    00:53:11	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:22.720    00:53:11	-- scheduler/common.sh@682 -- # [[ -v raw_samples_38 ]]
00:22:22.720    00:53:11	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_38
00:22:22.720    00:53:11	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@690 -- # case "$time" in
00:22:22.720    00:53:11	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:22.720     00:53:11	-- scheduler/common.sh@691 -- # trap - ERR
00:22:22.720     00:53:11	-- scheduler/common.sh@691 -- # print_backtrace
00:22:22.720     00:53:11	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:22:22.720     00:53:11	-- common/autotest_common.sh@1142 -- # return 0
00:22:22.720     00:53:11	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:22.720    00:53:11	-- scheduler/common.sh@697 -- # usage=0
00:22:22.720    00:53:11	-- scheduler/common.sh@698 -- # usage=0
00:22:22.720    00:53:11	-- scheduler/common.sh@700 -- # printf %u 0
00:22:22.720    00:53:11	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 38 user 0
00:22:22.720  * cpu38 user usage: 0
00:22:22.720    00:53:11	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 38 '14474 14474 14474 14474 14474'
00:22:22.720  * cpu38 user samples: 14474 14474 14474 14474 14474
00:22:22.720    00:53:11	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 38 '38 38 38 38 38'
00:22:22.720  * cpu38 nice samples: 38 38 38 38 38
00:22:22.720    00:53:11	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 38 '2819 2819 2819 2819 2819'
00:22:22.720  * cpu38 system samples: 2819 2819 2819 2819 2819
00:22:22.720   00:53:11	-- scheduler/common.sh@652 -- # user_load=0
00:22:22.720   00:53:11	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:22.720   00:53:11	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 38
00:22:22.720  * cpu38 is idle
00:22:22.720   00:53:11	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:22.720   00:53:11	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:22.720   00:53:11	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:22.720    00:53:11	-- scheduler/common.sh@641 -- # calc_median 100 100 100 100 99
00:22:22.720    00:53:11	-- scheduler/common.sh@727 -- # samples=('100' '100' '100' '100' '99')
00:22:22.720    00:53:11	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:22.720    00:53:11	-- scheduler/common.sh@728 -- # local middle median sample
00:22:22.720    00:53:11	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:22.720     00:53:11	-- scheduler/common.sh@730 -- # printf '%s\n' 100 100 100 100 99
00:22:22.720     00:53:11	-- scheduler/common.sh@730 -- # sort -n
00:22:22.720    00:53:11	-- scheduler/common.sh@732 -- # middle=2
00:22:22.720    00:53:11	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:22.720    00:53:11	-- scheduler/common.sh@736 -- # median=100
00:22:22.720    00:53:11	-- scheduler/common.sh@739 -- # echo 100
00:22:22.720   00:53:11	-- scheduler/common.sh@641 -- # load_median=100
00:22:22.720   00:53:11	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 39 '100 100 100 100 99' 99 100
00:22:22.720  * cpu39 idle samples: 100 100 100 100 99 (avg: 99%, median: 100%)
00:22:22.720    00:53:11	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 39 user
00:22:22.720    00:53:11	-- scheduler/common.sh@678 -- # local cpu=39 time=user
00:22:22.720    00:53:11	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:22.720    00:53:11	-- scheduler/common.sh@682 -- # [[ -v raw_samples_39 ]]
00:22:22.720    00:53:11	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_39
00:22:22.720    00:53:11	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@690 -- # case "$time" in
00:22:22.720    00:53:11	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:22.720     00:53:11	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:22.720    00:53:11	-- scheduler/common.sh@697 -- # usage=1
00:22:22.720    00:53:11	-- scheduler/common.sh@698 -- # usage=1
00:22:22.720    00:53:11	-- scheduler/common.sh@700 -- # printf %u 1
00:22:22.720    00:53:11	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 39 user 1
00:22:22.720  * cpu39 user usage: 1
00:22:22.720    00:53:11	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 39 '14626 14626 14626 14626 14627'
00:22:22.720  * cpu39 user samples: 14626 14626 14626 14626 14627
00:22:22.720    00:53:11	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 39 '0 0 0 0 0'
00:22:22.720  * cpu39 nice samples: 0 0 0 0 0
00:22:22.720    00:53:11	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 39 '2943 2943 2943 2943 2943'
00:22:22.720  * cpu39 system samples: 2943 2943 2943 2943 2943
00:22:22.720   00:53:11	-- scheduler/common.sh@652 -- # user_load=1
00:22:22.720   00:53:11	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:22.720   00:53:11	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 39
00:22:22.720  * cpu39 is idle
00:22:22.720   00:53:11	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:22.720   00:53:11	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:22.720   00:53:11	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:22.720    00:53:11	-- scheduler/common.sh@641 -- # calc_median 99 99 100 100 100
00:22:22.720    00:53:11	-- scheduler/common.sh@727 -- # samples=('99' '99' '100' '100' '100')
00:22:22.720    00:53:11	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:22.720    00:53:11	-- scheduler/common.sh@728 -- # local middle median sample
00:22:22.720    00:53:11	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:22.720     00:53:11	-- scheduler/common.sh@730 -- # printf '%s\n' 99 99 100 100 100
00:22:22.720     00:53:11	-- scheduler/common.sh@730 -- # sort -n
00:22:22.720    00:53:11	-- scheduler/common.sh@732 -- # middle=2
00:22:22.720    00:53:11	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:22.720    00:53:11	-- scheduler/common.sh@736 -- # median=100
00:22:22.720    00:53:11	-- scheduler/common.sh@739 -- # echo 100
00:22:22.720   00:53:11	-- scheduler/common.sh@641 -- # load_median=100
00:22:22.720   00:53:11	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 40 '99 99 100 100 100' 99 100
00:22:22.720  * cpu40 idle samples: 99 99 100 100 100 (avg: 99%, median: 100%)
00:22:22.720    00:53:11	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 40 user
00:22:22.720    00:53:11	-- scheduler/common.sh@678 -- # local cpu=40 time=user
00:22:22.720    00:53:11	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:22.720    00:53:11	-- scheduler/common.sh@682 -- # [[ -v raw_samples_40 ]]
00:22:22.720    00:53:11	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_40
00:22:22.720    00:53:11	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:22.720    00:53:11	-- scheduler/common.sh@690 -- # case "$time" in
00:22:22.720    00:53:11	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:22.721     00:53:11	-- scheduler/common.sh@691 -- # trap - ERR
00:22:22.721     00:53:11	-- scheduler/common.sh@691 -- # print_backtrace
00:22:22.721     00:53:11	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:22:22.721     00:53:11	-- common/autotest_common.sh@1142 -- # return 0
00:22:22.721     00:53:11	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:22.721    00:53:11	-- scheduler/common.sh@697 -- # usage=0
00:22:22.721    00:53:11	-- scheduler/common.sh@698 -- # usage=0
00:22:22.721    00:53:11	-- scheduler/common.sh@700 -- # printf %u 0
00:22:22.721    00:53:11	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 40 user 0
00:22:22.721  * cpu40 user usage: 0
00:22:22.721    00:53:11	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 40 '15167 15167 15167 15167 15167'
00:22:22.721  * cpu40 user samples: 15167 15167 15167 15167 15167
00:22:22.721    00:53:11	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 40 '0 0 0 0 0'
00:22:22.721  * cpu40 nice samples: 0 0 0 0 0
00:22:22.721    00:53:11	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 40 '3288 3288 3288 3288 3288'
00:22:22.721  * cpu40 system samples: 3288 3288 3288 3288 3288
00:22:22.721   00:53:11	-- scheduler/common.sh@652 -- # user_load=0
00:22:22.721   00:53:11	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:22.721   00:53:11	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 40
00:22:22.721  * cpu40 is idle
00:22:22.721   00:53:11	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@31 -- # rpc_cmd framework_get_reactors
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@31 -- # jq -r '.reactors[]'
00:22:22.721    00:53:11	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:22.721    00:53:11	-- common/autotest_common.sh@10 -- # set +x
00:22:22.721    00:53:11	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@31 -- # reactor_framework='{
00:22:22.721    "lcore": 1,
00:22:22.721    "busy": 683801266,
00:22:22.721    "idle": 34975883442,
00:22:22.721    "in_interrupt": false,
00:22:22.721    "core_freq": 1000,
00:22:22.721    "lw_threads": [
00:22:22.721      {
00:22:22.721        "name": "app_thread",
00:22:22.721        "id": 1,
00:22:22.721        "cpumask": "2",
00:22:22.721        "elapsed": 35697684336
00:22:22.721      }
00:22:22.721    ]
00:22:22.721  }
00:22:22.721  {
00:22:22.721    "lcore": 2,
00:22:22.721    "busy": 0,
00:22:22.721    "idle": 1972418806,
00:22:22.721    "in_interrupt": true,
00:22:22.721    "core_freq": 2300,
00:22:22.721    "lw_threads": []
00:22:22.721  }
00:22:22.721  {
00:22:22.721    "lcore": 3,
00:22:22.721    "busy": 0,
00:22:22.721    "idle": 1972578452,
00:22:22.721    "in_interrupt": true,
00:22:22.721    "core_freq": 2300,
00:22:22.721    "lw_threads": []
00:22:22.721  }
00:22:22.721  {
00:22:22.721    "lcore": 4,
00:22:22.721    "busy": 0,
00:22:22.721    "idle": 1972961296,
00:22:22.721    "in_interrupt": true,
00:22:22.721    "core_freq": 2300,
00:22:22.721    "lw_threads": []
00:22:22.721  }
00:22:22.721  {
00:22:22.721    "lcore": 37,
00:22:22.721    "busy": 0,
00:22:22.721    "idle": 1969024806,
00:22:22.721    "in_interrupt": true,
00:22:22.721    "core_freq": 2300,
00:22:22.721    "lw_threads": []
00:22:22.721  }
00:22:22.721  {
00:22:22.721    "lcore": 38,
00:22:22.721    "busy": 0,
00:22:22.721    "idle": 1969312040,
00:22:22.721    "in_interrupt": true,
00:22:22.721    "core_freq": 2300,
00:22:22.721    "lw_threads": []
00:22:22.721  }
00:22:22.721  {
00:22:22.721    "lcore": 39,
00:22:22.721    "busy": 0,
00:22:22.721    "idle": 1960580738,
00:22:22.721    "in_interrupt": true,
00:22:22.721    "core_freq": 2300,
00:22:22.721    "lw_threads": []
00:22:22.721  }
00:22:22.721  {
00:22:22.721    "lcore": 40,
00:22:22.721    "busy": 0,
00:22:22.721    "idle": 1960681164,
00:22:22.721    "in_interrupt": true,
00:22:22.721    "core_freq": 2300,
00:22:22.721    "lw_threads": []
00:22:22.721  }'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 2) | .lw_threads[].id'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 37) | .lw_threads[].id'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 38) | .lw_threads[].id'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 39) | .lw_threads[].id'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 40) | .lw_threads[].id'
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@41 -- # (( is_idle[cpu] == 0 ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
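
The loop above (interrupt.sh@39-44) walks is_idle[] and enforces the test's precondition: the SPDK main core, which runs app_thread, must be the only busy core, and every other core must have been measured idle. Reconstructed from the trace:

    # Preconditions before worker threads are spawned (sketch of interrupt.sh@39-44).
    for cpu in "${!is_idle[@]}"; do
        if (( cpu == spdk_main_core )); then
            (( is_idle[cpu] == 0 ))   # main core hosts app_thread, so it must be busy
        else
            (( is_idle[cpu] == 1 ))   # all other cores must start out idle
        fi
    done
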
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@49 -- # busy_cpus=("${cpus[@]:1:3}")
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@49 -- # threads=()
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}"
00:22:22.721     00:53:11	-- scheduler/interrupt.sh@54 -- # mask_cpus 2
00:22:22.721      00:53:11	-- scheduler/common.sh@166 -- # fold_array_onto_string 2
00:22:22.721      00:53:11	-- scheduler/common.sh@27 -- # cpus=('2')
00:22:22.721      00:53:11	-- scheduler/common.sh@27 -- # local cpus
00:22:22.721      00:53:11	-- scheduler/common.sh@29 -- # local IFS=,
00:22:22.721      00:53:11	-- scheduler/common.sh@30 -- # echo 2
00:22:22.721     00:53:11	-- scheduler/common.sh@166 -- # printf '[%s]\n' 2
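
mask_cpus builds the cpumask argument for the thread-create RPC: fold_array_onto_string joins its arguments with IFS=',' and the result is wrapped in brackets, so cpu 2 becomes '[2]'. The same folding in isolation:

    # Fold a cpu list into the '[a,b,c]' mask string used below (sketch).
    fold_cpus() { local IFS=,; printf '[%s]\n' "$*"; }
    fold_cpus 2        # -> [2]
    fold_cpus 2 3 4    # -> [2,3,4]
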
00:22:22.721    00:53:11	-- scheduler/interrupt.sh@54 -- # create_thread -n thread2 -m '[2]' -a 100
00:22:22.721    00:53:11	-- scheduler/common.sh@471 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread2 -m '[2]' -a 100
00:22:22.721    00:53:11	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:22.721    00:53:11	-- common/autotest_common.sh@10 -- # set +x
00:22:22.721    00:53:11	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@54 -- # threads[cpu]=2
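
create_thread is a thin wrapper over an rpc.py plugin call: scheduler_thread_create spawns a lightweight thread named thread2, restricted to cpumask [2], with an active (busy-loop) percentage of 100, and returns the new thread id, stored here in threads[cpu]. The direct form, assuming the test's scheduler_plugin is importable as it is in this run:

    # Spawn a 100%-busy thread pinned to cpu 2; prints the new thread id.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n thread2 -m '[2]' -a 100
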
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu")
00:22:22.721   00:53:11	-- scheduler/interrupt.sh@55 -- # collect_cpu_idle
00:22:22.721   00:53:11	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:22:22.721   00:53:11	-- scheduler/common.sh@628 -- # local time=5
00:22:22.721   00:53:11	-- scheduler/common.sh@629 -- # local cpu
00:22:22.721   00:53:11	-- scheduler/common.sh@630 -- # local samples
00:22:22.721   00:53:11	-- scheduler/common.sh@631 -- # is_idle=()
00:22:22.721   00:53:11	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:22.721   00:53:11	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5
00:22:22.721  Collecting cpu idle stats (cpus: 2) for 5 seconds...
00:22:22.721   00:53:11	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 2
00:22:22.721   00:53:11	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:22.721   00:53:11	-- common/autotest_common.sh@10 -- # set +x
00:22:29.295   00:53:17	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:29.295   00:53:17	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:29.295   00:53:17	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:29.295    00:53:17	-- scheduler/common.sh@641 -- # calc_median 100 18 0 0 0
00:22:29.295    00:53:17	-- scheduler/common.sh@727 -- # samples=('100' '18' '0' '0' '0')
00:22:29.295    00:53:17	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:29.295    00:53:17	-- scheduler/common.sh@728 -- # local middle median sample
00:22:29.295    00:53:17	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:29.295     00:53:17	-- scheduler/common.sh@730 -- # printf '%s\n' 100 18 0 0 0
00:22:29.295     00:53:17	-- scheduler/common.sh@730 -- # sort -n
00:22:29.295    00:53:17	-- scheduler/common.sh@732 -- # middle=2
00:22:29.295    00:53:17	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:29.295    00:53:17	-- scheduler/common.sh@736 -- # median=0
00:22:29.295    00:53:17	-- scheduler/common.sh@739 -- # echo 0
00:22:29.295   00:53:17	-- scheduler/common.sh@641 -- # load_median=0
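
calc_median sorts the five samples numerically and takes the middle element: '100 18 0 0 0' sorts to '0 0 0 18 100', index 2 is 0, hence load_median=0. The even-count branch at common.sh@733 is skipped for five samples; averaging the two middle values there is an assumption, since this trace only exercises odd counts:

    # Median of a sample list (sketch of common.sh calc_median).
    calc_median() {
        local samples_sorted middle
        samples_sorted=($(printf '%s\n' "$@" | sort -n))
        middle=$(( $# / 2 ))
        if (( $# % 2 == 0 )); then
            # assumed behavior for even counts, not exercised in this trace
            echo $(( (samples_sorted[middle - 1] + samples_sorted[middle]) / 2 ))
        else
            echo "${samples_sorted[middle]}"
        fi
    }
    calc_median 100 18 0 0 0   # -> 0
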
00:22:29.295   00:53:17	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '100 18 0 0 0' 23 0
00:22:29.295  * cpu2 idle samples: 100 18 0 0 0 (avg: 23%, median: 0%)
00:22:29.295    00:53:17	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 2 user
00:22:29.295    00:53:17	-- scheduler/common.sh@678 -- # local cpu=2 time=user
00:22:29.295    00:53:17	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:29.295    00:53:17	-- scheduler/common.sh@682 -- # [[ -v raw_samples_2 ]]
00:22:29.295    00:53:17	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_2
00:22:29.295    00:53:17	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:29.295    00:53:17	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:29.295    00:53:17	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:29.295    00:53:17	-- scheduler/common.sh@690 -- # case "$time" in
00:22:29.295    00:53:17	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:29.295     00:53:17	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:29.295    00:53:17	-- scheduler/common.sh@697 -- # usage=101
00:22:29.295    00:53:17	-- scheduler/common.sh@698 -- # usage=100
00:22:29.295    00:53:17	-- scheduler/common.sh@700 -- # printf %u 100
00:22:29.295    00:53:17	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 2 user 100
00:22:29.295  * cpu2 user usage: 100
00:22:29.295    00:53:17	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 2 '71388 71470 71570 71671 71772'
00:22:29.295  * cpu2 user samples: 71388 71470 71570 71671 71772
00:22:29.295    00:53:17	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0'
00:22:29.295  * cpu2 nice samples: 0 0 0 0 0
00:22:29.295    00:53:17	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 2 '5374 5374 5374 5374 5374'
00:22:29.295  * cpu2 system samples: 5374 5374 5374 5374 5374
00:22:29.295   00:53:17	-- scheduler/common.sh@652 -- # user_load=100
00:22:29.295   00:53:17	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:29.295   00:53:17	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:22:29.295   00:53:17	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 2
00:22:29.295  * cpu2 is not idle
00:22:29.295   00:53:17	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
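
The usage figure comes from tick deltas, not from the idle samples: cpu_usage_clk_tck takes the user-field delta of the last /proc/stat interval (71772 - 71671 = 101 ticks), scales it by CLK_TCK over the sampling interval, and clamps to 100, which is why usage momentarily reads 101 at common.sh@697 before printing as 100. The arithmetic, assuming a 1-second interval and the usual Linux CLK_TCK of 100:

    delta=$(( 71772 - 71671 ))             # 101 ticks of user time in the last interval
    usage=$(( delta * 100 / (100 * 1) ))   # = 101 with CLK_TCK=100 over 1 s
    echo $(( usage > 100 ? 100 : usage ))  # clamped -> '* cpu2 user usage: 100'

With user_load=100, both idle tests fail (last sample 0 < 70, load > 15), so cpu2 is correctly reported busy.
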
00:22:29.295    00:53:17	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 2
00:22:29.295    00:53:17	-- scheduler/common.sh@747 -- # xtrace_disable
00:22:29.295    00:53:17	-- common/autotest_common.sh@10 -- # set +x
00:22:32.583  stime samples: 0 0 0 0
00:22:32.583  utime samples: 0 100 100 100
00:22:32.583   00:53:21	-- scheduler/common.sh@666 -- # user_spdk_load=100
00:22:32.583   00:53:21	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
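
Whole-cpu load alone could come from any process, so the test cross-checks that the busy time belongs to SPDK itself: get_spdk_proc_time samples the target's own stime/utime over the same window (the '0 100 100 100' utime samples above) and requires user_spdk_load to exceed the same 15% idle threshold. On Linux those counters are fields 14 and 15 of /proc/<pid>/stat; a hedged way to read them, guarding against spaces in the comm field:

    # utime/stime (clock ticks) for $pid from /proc/$pid/stat (sketch).
    stat=$(</proc/$pid/stat)
    rest=${stat##*) }        # strip 'pid (comm) ' so comm can't shift the fields
    set -- $rest             # $12 = utime (field 14), $13 = stime (field 15)
    utime=${12} stime=${13}
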
00:22:32.583    00:53:21	-- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors
00:22:32.583    00:53:21	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:32.583    00:53:21	-- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]'
00:22:32.583    00:53:21	-- common/autotest_common.sh@10 -- # set +x
00:22:32.583    00:53:21	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:32.583   00:53:21	-- scheduler/interrupt.sh@56 -- # reactor_framework='{
00:22:32.583    "lcore": 1,
00:22:32.583    "busy": 3928929936,
00:22:32.583    "idle": 55535955548,
00:22:32.583    "in_interrupt": false,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": [
00:22:32.583      {
00:22:32.583        "name": "app_thread",
00:22:32.583        "id": 1,
00:22:32.583        "cpumask": "2",
00:22:32.583        "elapsed": 59502898540
00:22:32.583      }
00:22:32.583    ]
00:22:32.583  }
00:22:32.583  {
00:22:32.583    "lcore": 2,
00:22:32.583    "busy": 19783478654,
00:22:32.583    "idle": 2662467514,
00:22:32.583    "in_interrupt": false,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": [
00:22:32.583      {
00:22:32.583        "name": "thread2",
00:22:32.583        "id": 2,
00:22:32.583        "cpumask": "4",
00:22:32.583        "elapsed": 19726932918
00:22:32.583      }
00:22:32.583    ]
00:22:32.583  }
00:22:32.583  {
00:22:32.583    "lcore": 3,
00:22:32.583    "busy": 0,
00:22:32.583    "idle": 1972578452,
00:22:32.583    "in_interrupt": true,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": []
00:22:32.583  }
00:22:32.583  {
00:22:32.583    "lcore": 4,
00:22:32.583    "busy": 0,
00:22:32.583    "idle": 1972961296,
00:22:32.583    "in_interrupt": true,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": []
00:22:32.583  }
00:22:32.583  {
00:22:32.583    "lcore": 37,
00:22:32.583    "busy": 0,
00:22:32.583    "idle": 1969024806,
00:22:32.583    "in_interrupt": true,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": []
00:22:32.583  }
00:22:32.583  {
00:22:32.583    "lcore": 38,
00:22:32.583    "busy": 0,
00:22:32.583    "idle": 1969312040,
00:22:32.583    "in_interrupt": true,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": []
00:22:32.583  }
00:22:32.583  {
00:22:32.583    "lcore": 39,
00:22:32.583    "busy": 0,
00:22:32.583    "idle": 1960580738,
00:22:32.583    "in_interrupt": true,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": []
00:22:32.583  }
00:22:32.583  {
00:22:32.583    "lcore": 40,
00:22:32.583    "busy": 0,
00:22:32.583    "idle": 1960681164,
00:22:32.583    "in_interrupt": true,
00:22:32.583    "core_freq": 2300,
00:22:32.583    "lw_threads": []
00:22:32.583  }'
00:22:32.583    00:53:21	-- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 2) | .lw_threads[] | select(.name == "thread2")'
00:22:32.583   00:53:21	-- scheduler/interrupt.sh@57 -- # [[ -n {
00:22:32.583    "name": "thread2",
00:22:32.583    "id": 2,
00:22:32.583    "cpumask": "4",
00:22:32.583    "elapsed": 19726932918
00:22:32.583  } ]]
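
interrupt.sh@57 re-queries the reactors and asserts thread2 is really running on its target lcore; a non-empty jq selection is the pass condition. Note that cpumask values are hex bitmasks: app_thread's '2' is bit 1 (the main core, cpu 1) and thread2's '4' is bit 2 (cpu 2); the later threads follow the same encoding ('8' = cpu 3, '10' = 0x10 = cpu 4).

    # Non-empty output proves thread2 was scheduled onto lcore 2.
    ./scripts/rpc.py framework_get_reactors \
        | jq -r '.reactors[] | select(.lcore == 2) | .lw_threads[] | select(.name == "thread2")'
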
00:22:32.583   00:53:21	-- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 ))
00:22:32.583   00:53:21	-- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}"
00:22:32.583     00:53:21	-- scheduler/interrupt.sh@54 -- # mask_cpus 3
00:22:32.583      00:53:21	-- scheduler/common.sh@166 -- # fold_array_onto_string 3
00:22:32.583      00:53:21	-- scheduler/common.sh@27 -- # cpus=('3')
00:22:32.583      00:53:21	-- scheduler/common.sh@27 -- # local cpus
00:22:32.583      00:53:21	-- scheduler/common.sh@29 -- # local IFS=,
00:22:32.583      00:53:21	-- scheduler/common.sh@30 -- # echo 3
00:22:32.583     00:53:21	-- scheduler/common.sh@166 -- # printf '[%s]\n' 3
00:22:32.583    00:53:21	-- scheduler/interrupt.sh@54 -- # create_thread -n thread3 -m '[3]' -a 100
00:22:32.583    00:53:21	-- scheduler/common.sh@471 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread3 -m '[3]' -a 100
00:22:32.583    00:53:21	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:32.583    00:53:21	-- common/autotest_common.sh@10 -- # set +x
00:22:32.842    00:53:21	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:32.842   00:53:21	-- scheduler/interrupt.sh@54 -- # threads[cpu]=3
00:22:32.842   00:53:21	-- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu")
00:22:32.842   00:53:21	-- scheduler/interrupt.sh@55 -- # collect_cpu_idle
00:22:32.842   00:53:21	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:22:32.842   00:53:21	-- scheduler/common.sh@628 -- # local time=5
00:22:32.842   00:53:21	-- scheduler/common.sh@629 -- # local cpu
00:22:32.842   00:53:21	-- scheduler/common.sh@630 -- # local samples
00:22:32.842   00:53:21	-- scheduler/common.sh@631 -- # is_idle=()
00:22:32.842   00:53:21	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:32.842   00:53:21	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5
00:22:32.842  Collecting cpu idle stats (cpus: 3) for 5 seconds...
00:22:32.842   00:53:21	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 3
00:22:32.842   00:53:21	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:32.842   00:53:21	-- common/autotest_common.sh@10 -- # set +x
00:22:39.409   00:53:27	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:39.409   00:53:27	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:39.409   00:53:27	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:39.409    00:53:27	-- scheduler/common.sh@641 -- # calc_median 78 0 0 0 0
00:22:39.409    00:53:27	-- scheduler/common.sh@727 -- # samples=('78' '0' '0' '0' '0')
00:22:39.409    00:53:27	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:39.409    00:53:27	-- scheduler/common.sh@728 -- # local middle median sample
00:22:39.409    00:53:27	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:39.409     00:53:27	-- scheduler/common.sh@730 -- # printf '%s\n' 78 0 0 0 0
00:22:39.409     00:53:27	-- scheduler/common.sh@730 -- # sort -n
00:22:39.409    00:53:27	-- scheduler/common.sh@732 -- # middle=2
00:22:39.409    00:53:27	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:39.409    00:53:27	-- scheduler/common.sh@736 -- # median=0
00:22:39.409    00:53:27	-- scheduler/common.sh@739 -- # echo 0
00:22:39.409   00:53:27	-- scheduler/common.sh@641 -- # load_median=0
00:22:39.409   00:53:27	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '78 0 0 0 0' 15 0
00:22:39.409  * cpu3 idle samples: 78 0 0 0 0 (avg: 15%, median: 0%)
00:22:39.409    00:53:27	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 3 user
00:22:39.409    00:53:27	-- scheduler/common.sh@678 -- # local cpu=3 time=user
00:22:39.409    00:53:27	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:39.409    00:53:27	-- scheduler/common.sh@682 -- # [[ -v raw_samples_3 ]]
00:22:39.409    00:53:27	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_3
00:22:39.409    00:53:27	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:39.409    00:53:27	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:39.409    00:53:27	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:39.409    00:53:27	-- scheduler/common.sh@690 -- # case "$time" in
00:22:39.409    00:53:27	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:39.409     00:53:27	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:39.409    00:53:27	-- scheduler/common.sh@697 -- # usage=100
00:22:39.409    00:53:27	-- scheduler/common.sh@698 -- # usage=100
00:22:39.409    00:53:27	-- scheduler/common.sh@700 -- # printf %u 100
00:22:39.409    00:53:27	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 3 user 100
00:22:39.409  * cpu3 user usage: 100
00:22:39.409    00:53:27	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 3 '62103 62204 62304 62405 62505'
00:22:39.409  * cpu3 user samples: 62103 62204 62304 62405 62505
00:22:39.409    00:53:27	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 3 '6 6 6 6 6'
00:22:39.409  * cpu3 nice samples: 6 6 6 6 6
00:22:39.409    00:53:27	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 3 '4961 4961 4961 4961 4961'
00:22:39.409  * cpu3 system samples: 4961 4961 4961 4961 4961
00:22:39.409   00:53:27	-- scheduler/common.sh@652 -- # user_load=100
00:22:39.409   00:53:27	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:39.409   00:53:27	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:22:39.409   00:53:27	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 3
00:22:39.409  * cpu3 is not idle
00:22:39.409   00:53:27	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
00:22:39.409    00:53:27	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 3
00:22:39.409    00:53:27	-- scheduler/common.sh@747 -- # xtrace_disable
00:22:39.409    00:53:27	-- common/autotest_common.sh@10 -- # set +x
00:22:42.793  stime samples: 0 0 0 0
00:22:42.794  utime samples: 0 100 100 100
00:22:42.794   00:53:31	-- scheduler/common.sh@666 -- # user_spdk_load=100
00:22:42.794   00:53:31	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
00:22:42.794    00:53:31	-- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors
00:22:42.794    00:53:31	-- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]'
00:22:42.794    00:53:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:42.794    00:53:31	-- common/autotest_common.sh@10 -- # set +x
00:22:43.052    00:53:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:43.052   00:53:32	-- scheduler/interrupt.sh@56 -- # reactor_framework='{
00:22:43.052    "lcore": 1,
00:22:43.052    "busy": 3957117338,
00:22:43.052    "idle": 79219120594,
00:22:43.052    "in_interrupt": false,
00:22:43.052    "core_freq": 2300,
00:22:43.052    "lw_threads": [
00:22:43.052      {
00:22:43.052        "name": "app_thread",
00:22:43.052        "id": 1,
00:22:43.052        "cpumask": "2",
00:22:43.052        "elapsed": 83214238648
00:22:43.052      }
00:22:43.052    ]
00:22:43.052  }
00:22:43.052  {
00:22:43.052    "lcore": 2,
00:22:43.052    "busy": 43477960014,
00:22:43.052    "idle": 2662467514,
00:22:43.052    "in_interrupt": false,
00:22:43.052    "core_freq": 2300,
00:22:43.052    "lw_threads": [
00:22:43.052      {
00:22:43.052        "name": "thread2",
00:22:43.052        "id": 2,
00:22:43.052        "cpumask": "4",
00:22:43.052        "elapsed": 43438273026
00:22:43.052      }
00:22:43.052    ]
00:22:43.052  }
00:22:43.052  {
00:22:43.052    "lcore": 3,
00:22:43.052    "busy": 20703774372,
00:22:43.052    "idle": 2892176860,
00:22:43.052    "in_interrupt": false,
00:22:43.052    "core_freq": 2300,
00:22:43.052    "lw_threads": [
00:22:43.052      {
00:22:43.052        "name": "thread3",
00:22:43.052        "id": 3,
00:22:43.052        "cpumask": "8",
00:22:43.052        "elapsed": 20433866966
00:22:43.052      }
00:22:43.052    ]
00:22:43.052  }
00:22:43.052  {
00:22:43.052    "lcore": 4,
00:22:43.052    "busy": 0,
00:22:43.052    "idle": 1972961296,
00:22:43.052    "in_interrupt": true,
00:22:43.052    "core_freq": 2300,
00:22:43.052    "lw_threads": []
00:22:43.052  }
00:22:43.052  {
00:22:43.052    "lcore": 37,
00:22:43.052    "busy": 0,
00:22:43.052    "idle": 1969024806,
00:22:43.052    "in_interrupt": true,
00:22:43.052    "core_freq": 2300,
00:22:43.052    "lw_threads": []
00:22:43.052  }
00:22:43.052  {
00:22:43.052    "lcore": 38,
00:22:43.052    "busy": 0,
00:22:43.052    "idle": 1969312040,
00:22:43.052    "in_interrupt": true,
00:22:43.052    "core_freq": 2300,
00:22:43.052    "lw_threads": []
00:22:43.052  }
00:22:43.052  {
00:22:43.052    "lcore": 39,
00:22:43.052    "busy": 0,
00:22:43.052    "idle": 1960580738,
00:22:43.052    "in_interrupt": true,
00:22:43.052    "core_freq": 2300,
00:22:43.052    "lw_threads": []
00:22:43.052  }
00:22:43.052  {
00:22:43.052    "lcore": 40,
00:22:43.052    "busy": 0,
00:22:43.052    "idle": 1960681164,
00:22:43.053    "in_interrupt": true,
00:22:43.053    "core_freq": 2300,
00:22:43.053    "lw_threads": []
00:22:43.053  }'
00:22:43.053    00:53:32	-- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 3) | .lw_threads[] | select(.name == "thread3")'
00:22:43.053   00:53:32	-- scheduler/interrupt.sh@57 -- # [[ -n {
00:22:43.053    "name": "thread3",
00:22:43.053    "id": 3,
00:22:43.053    "cpumask": "8",
00:22:43.053    "elapsed": 20433866966
00:22:43.053  } ]]
00:22:43.053   00:53:32	-- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 ))
00:22:43.053   00:53:32	-- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}"
00:22:43.053     00:53:32	-- scheduler/interrupt.sh@54 -- # mask_cpus 4
00:22:43.053      00:53:32	-- scheduler/common.sh@166 -- # fold_array_onto_string 4
00:22:43.053      00:53:32	-- scheduler/common.sh@27 -- # cpus=('4')
00:22:43.053      00:53:32	-- scheduler/common.sh@27 -- # local cpus
00:22:43.053      00:53:32	-- scheduler/common.sh@29 -- # local IFS=,
00:22:43.053      00:53:32	-- scheduler/common.sh@30 -- # echo 4
00:22:43.053     00:53:32	-- scheduler/common.sh@166 -- # printf '[%s]\n' 4
00:22:43.053    00:53:32	-- scheduler/interrupt.sh@54 -- # create_thread -n thread4 -m '[4]' -a 100
00:22:43.053    00:53:32	-- scheduler/common.sh@471 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread4 -m '[4]' -a 100
00:22:43.053    00:53:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:43.053    00:53:32	-- common/autotest_common.sh@10 -- # set +x
00:22:43.053    00:53:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:43.053   00:53:32	-- scheduler/interrupt.sh@54 -- # threads[cpu]=4
00:22:43.053   00:53:32	-- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu")
00:22:43.053   00:53:32	-- scheduler/interrupt.sh@55 -- # collect_cpu_idle
00:22:43.053   00:53:32	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:22:43.053   00:53:32	-- scheduler/common.sh@628 -- # local time=5
00:22:43.053   00:53:32	-- scheduler/common.sh@629 -- # local cpu
00:22:43.053   00:53:32	-- scheduler/common.sh@630 -- # local samples
00:22:43.053   00:53:32	-- scheduler/common.sh@631 -- # is_idle=()
00:22:43.053   00:53:32	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:43.053   00:53:32	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5
00:22:43.053  Collecting cpu idle stats (cpus: 4) for 5 seconds...
00:22:43.053   00:53:32	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 4
00:22:43.053   00:53:32	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:43.053   00:53:32	-- common/autotest_common.sh@10 -- # set +x
00:22:49.613   00:53:38	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:49.613   00:53:38	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:49.613   00:53:38	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:49.613    00:53:38	-- scheduler/common.sh@641 -- # calc_median 38 0 0 0 0
00:22:49.613    00:53:38	-- scheduler/common.sh@727 -- # samples=('38' '0' '0' '0' '0')
00:22:49.613    00:53:38	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:49.613    00:53:38	-- scheduler/common.sh@728 -- # local middle median sample
00:22:49.613    00:53:38	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:49.613     00:53:38	-- scheduler/common.sh@730 -- # printf '%s\n' 38 0 0 0 0
00:22:49.613     00:53:38	-- scheduler/common.sh@730 -- # sort -n
00:22:49.613    00:53:38	-- scheduler/common.sh@732 -- # middle=2
00:22:49.613    00:53:38	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:49.613    00:53:38	-- scheduler/common.sh@736 -- # median=0
00:22:49.613    00:53:38	-- scheduler/common.sh@739 -- # echo 0
00:22:49.613   00:53:38	-- scheduler/common.sh@641 -- # load_median=0
00:22:49.613   00:53:38	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '38 0 0 0 0' 7 0
00:22:49.613  * cpu4 idle samples: 38 0 0 0 0 (avg: 7%, median: 0%)
00:22:49.613    00:53:38	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 4 user
00:22:49.613    00:53:38	-- scheduler/common.sh@678 -- # local cpu=4 time=user
00:22:49.613    00:53:38	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:49.613    00:53:38	-- scheduler/common.sh@682 -- # [[ -v raw_samples_4 ]]
00:22:49.613    00:53:38	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_4
00:22:49.613    00:53:38	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:49.613    00:53:38	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:49.613    00:53:38	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:49.613    00:53:38	-- scheduler/common.sh@690 -- # case "$time" in
00:22:49.613    00:53:38	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:49.613     00:53:38	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:49.613    00:53:38	-- scheduler/common.sh@697 -- # usage=101
00:22:49.613    00:53:38	-- scheduler/common.sh@698 -- # usage=100
00:22:49.613    00:53:38	-- scheduler/common.sh@700 -- # printf %u 100
00:22:49.613    00:53:38	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 4 user 100
00:22:49.613  * cpu4 user usage: 100
00:22:49.613    00:53:38	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 4 '25705 25805 25906 26006 26107'
00:22:49.613  * cpu4 user samples: 25705 25805 25906 26006 26107
00:22:49.613    00:53:38	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0'
00:22:49.613  * cpu4 nice samples: 0 0 0 0 0
00:22:49.613    00:53:38	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 4 '4804 4804 4804 4804 4804'
00:22:49.613  * cpu4 system samples: 4804 4804 4804 4804 4804
00:22:49.613   00:53:38	-- scheduler/common.sh@652 -- # user_load=100
00:22:49.613   00:53:38	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:49.613   00:53:38	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:22:49.613   00:53:38	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 4
00:22:49.613  * cpu4 is not idle
00:22:49.613   00:53:38	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
00:22:49.613    00:53:38	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 4
00:22:49.613    00:53:38	-- scheduler/common.sh@747 -- # xtrace_disable
00:22:49.613    00:53:38	-- common/autotest_common.sh@10 -- # set +x
00:22:53.801  stime samples: 0 0 0 0
00:22:53.801  utime samples: 0 100 100 100
00:22:53.801   00:53:42	-- scheduler/common.sh@666 -- # user_spdk_load=100
00:22:53.801   00:53:42	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
00:22:53.801    00:53:42	-- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors
00:22:53.801    00:53:42	-- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]'
00:22:53.801    00:53:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:53.801    00:53:42	-- common/autotest_common.sh@10 -- # set +x
00:22:53.801    00:53:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:53.801   00:53:42	-- scheduler/interrupt.sh@56 -- # reactor_framework='{
00:22:53.801    "lcore": 1,
00:22:53.801    "busy": 3985554162,
00:22:53.801    "idle": 103131704996,
00:22:53.801    "in_interrupt": false,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": [
00:22:53.801      {
00:22:53.801        "name": "app_thread",
00:22:53.801        "id": 1,
00:22:53.801        "cpumask": "2",
00:22:53.801        "elapsed": 107155269808
00:22:53.801      }
00:22:53.801    ]
00:22:53.801  }
00:22:53.801  {
00:22:53.801    "lcore": 2,
00:22:53.801    "busy": 67402397292,
00:22:53.801    "idle": 2662467514,
00:22:53.801    "in_interrupt": false,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": [
00:22:53.801      {
00:22:53.801        "name": "thread2",
00:22:53.801        "id": 2,
00:22:53.801        "cpumask": "4",
00:22:53.801        "elapsed": 67379304186
00:22:53.801      }
00:22:53.801    ]
00:22:53.801  }
00:22:53.801  {
00:22:53.801    "lcore": 3,
00:22:53.801    "busy": 44398364784,
00:22:53.801    "idle": 2892176860,
00:22:53.801    "in_interrupt": false,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": [
00:22:53.801      {
00:22:53.801        "name": "thread3",
00:22:53.801        "id": 3,
00:22:53.801        "cpumask": "8",
00:22:53.801        "elapsed": 44374898126
00:22:53.801      }
00:22:53.801    ]
00:22:53.801  }
00:22:53.801  {
00:22:53.801    "lcore": 4,
00:22:53.801    "busy": 21624034266,
00:22:53.801    "idle": 2892757614,
00:22:53.801    "in_interrupt": false,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": [
00:22:53.801      {
00:22:53.801        "name": "thread4",
00:22:53.801        "id": 4,
00:22:53.801        "cpumask": "10",
00:22:53.801        "elapsed": 21370365326
00:22:53.801      }
00:22:53.801    ]
00:22:53.801  }
00:22:53.801  {
00:22:53.801    "lcore": 37,
00:22:53.801    "busy": 0,
00:22:53.801    "idle": 1969024806,
00:22:53.801    "in_interrupt": true,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": []
00:22:53.801  }
00:22:53.801  {
00:22:53.801    "lcore": 38,
00:22:53.801    "busy": 0,
00:22:53.801    "idle": 1969312040,
00:22:53.801    "in_interrupt": true,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": []
00:22:53.801  }
00:22:53.801  {
00:22:53.801    "lcore": 39,
00:22:53.801    "busy": 0,
00:22:53.801    "idle": 1960580738,
00:22:53.801    "in_interrupt": true,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": []
00:22:53.801  }
00:22:53.801  {
00:22:53.801    "lcore": 40,
00:22:53.801    "busy": 0,
00:22:53.801    "idle": 1960681164,
00:22:53.801    "in_interrupt": true,
00:22:53.801    "core_freq": 2300,
00:22:53.801    "lw_threads": []
00:22:53.801  }'
00:22:53.801    00:53:42	-- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 4) | .lw_threads[] | select(.name == "thread4")'
00:22:53.801   00:53:42	-- scheduler/interrupt.sh@57 -- # [[ -n {
00:22:53.801    "name": "thread4",
00:22:53.801    "id": 4,
00:22:53.801    "cpumask": "10",
00:22:53.801    "elapsed": 21370365326
00:22:53.801  } ]]
00:22:53.801   00:53:42	-- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 ))
00:22:53.801   00:53:42	-- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}"
00:22:53.801   00:53:42	-- scheduler/interrupt.sh@64 -- # active_thread 2 0
00:22:53.801   00:53:42	-- scheduler/common.sh@479 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 2 0
00:22:53.801   00:53:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:53.801   00:53:42	-- common/autotest_common.sh@10 -- # set +x
00:22:53.801   00:53:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
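
Teardown begins here: active_thread 2 0 drops thread2's active percentage to 0 through the same plugin RPC (scheduler_thread_set_active), after which the dynamic scheduler is expected to migrate the now-idle thread off cpu 2 and back to the main core; the collect_cpu_idle that follows verifies cpu 2 actually goes idle. Direct form of the call:

    # Make thread id 2 fully idle; the scheduler should vacate cpu 2.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 2 0
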
00:22:53.802   00:53:42	-- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu")
00:22:53.802   00:53:42	-- scheduler/interrupt.sh@66 -- # collect_cpu_idle
00:22:53.802   00:53:42	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:22:53.802   00:53:42	-- scheduler/common.sh@628 -- # local time=5
00:22:53.802   00:53:42	-- scheduler/common.sh@629 -- # local cpu
00:22:53.802   00:53:42	-- scheduler/common.sh@630 -- # local samples
00:22:53.802   00:53:42	-- scheduler/common.sh@631 -- # is_idle=()
00:22:53.802   00:53:42	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:53.802   00:53:42	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5
00:22:53.802  Collecting cpu idle stats (cpus: 2) for 5 seconds...
00:22:53.802   00:53:42	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 2
00:22:53.802   00:53:42	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:53.802   00:53:42	-- common/autotest_common.sh@10 -- # set +x
00:23:00.363   00:53:48	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:23:00.363   00:53:48	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:23:00.363   00:53:48	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:23:00.363    00:53:48	-- scheduler/common.sh@641 -- # calc_median 0 0 82 100 100
00:23:00.363    00:53:48	-- scheduler/common.sh@727 -- # samples=('0' '0' '82' '100' '100')
00:23:00.363    00:53:48	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:23:00.363    00:53:48	-- scheduler/common.sh@728 -- # local middle median sample
00:23:00.363    00:53:48	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:23:00.363     00:53:48	-- scheduler/common.sh@730 -- # printf '%s\n' 0 0 82 100 100
00:23:00.364     00:53:48	-- scheduler/common.sh@730 -- # sort -n
00:23:00.364    00:53:48	-- scheduler/common.sh@732 -- # middle=2
00:23:00.364    00:53:48	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:23:00.364    00:53:48	-- scheduler/common.sh@736 -- # median=82
00:23:00.364    00:53:48	-- scheduler/common.sh@739 -- # echo 82
00:23:00.364   00:53:48	-- scheduler/common.sh@641 -- # load_median=82
00:23:00.364   00:53:48	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '0 0 82 100 100' 56 82
00:23:00.364  * cpu2 idle samples: 0 0 82 100 100 (avg: 56%, median: 82%)
00:23:00.364    00:53:48	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 2 user
00:23:00.364    00:53:48	-- scheduler/common.sh@678 -- # local cpu=2 time=user
00:23:00.364    00:53:48	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:23:00.364    00:53:48	-- scheduler/common.sh@682 -- # [[ -v raw_samples_2 ]]
00:23:00.364    00:53:48	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_2
00:23:00.364    00:53:48	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:23:00.364    00:53:48	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:23:00.364    00:53:48	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:23:00.364    00:53:48	-- scheduler/common.sh@690 -- # case "$time" in
00:23:00.364    00:53:48	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:23:00.364     00:53:48	-- scheduler/common.sh@691 -- # trap - ERR
00:23:00.364     00:53:48	-- scheduler/common.sh@691 -- # print_backtrace
00:23:00.364     00:53:48	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:23:00.364     00:53:48	-- common/autotest_common.sh@1142 -- # return 0
00:23:00.364     00:53:48	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:23:00.364    00:53:48	-- scheduler/common.sh@697 -- # usage=0
00:23:00.364    00:53:48	-- scheduler/common.sh@698 -- # usage=0
00:23:00.364    00:53:48	-- scheduler/common.sh@700 -- # printf %u 0
00:23:00.364    00:53:48	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 2 user 0
00:23:00.364  * cpu2 user usage: 0
00:23:00.364    00:53:48	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 2 '74469 74570 74588 74588 74588'
00:23:00.364  * cpu2 user samples: 74469 74570 74588 74588 74588
00:23:00.364    00:53:48	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0'
00:23:00.364  * cpu2 nice samples: 0 0 0 0 0
00:23:00.364    00:53:48	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 2 '5374 5374 5374 5374 5374'
00:23:00.364  * cpu2 system samples: 5374 5374 5374 5374 5374
00:23:00.364   00:53:48	-- scheduler/common.sh@652 -- # user_load=0
00:23:00.364   00:53:48	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:23:00.364   00:53:48	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 2
00:23:00.364  * cpu2 is idle
00:23:00.364   00:53:48	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:23:00.364    00:53:48	-- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors
00:23:00.364    00:53:48	-- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]'
00:23:00.364    00:53:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.364    00:53:48	-- common/autotest_common.sh@10 -- # set +x
00:23:00.364    00:53:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@67 -- # reactor_framework='{
00:23:00.364    "lcore": 1,
00:23:00.364    "busy": 4003002018,
00:23:00.364    "idle": 117537139362,
00:23:00.364    "in_interrupt": false,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": [
00:23:00.364      {
00:23:00.364        "name": "app_thread",
00:23:00.364        "id": 1,
00:23:00.364        "cpumask": "2",
00:23:00.364        "elapsed": 121578128914
00:23:00.364      },
00:23:00.364      {
00:23:00.364        "name": "thread2",
00:23:00.364        "id": 2,
00:23:00.364        "cpumask": "4",
00:23:00.364        "elapsed": 11408148192
00:23:00.364      }
00:23:00.364    ]
00:23:00.364  }
00:23:00.364  {
00:23:00.364    "lcore": 2,
00:23:00.364    "busy": 67862775200,
00:23:00.364    "idle": 7724098798,
00:23:00.364    "in_interrupt": true,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": []
00:23:00.364  }
00:23:00.364  {
00:23:00.364    "lcore": 3,
00:23:00.364    "busy": 58891039170,
00:23:00.364    "idle": 2892176860,
00:23:00.364    "in_interrupt": false,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": [
00:23:00.364      {
00:23:00.364        "name": "thread3",
00:23:00.364        "id": 3,
00:23:00.364        "cpumask": "8",
00:23:00.364        "elapsed": 58797757232
00:23:00.364      }
00:23:00.364    ]
00:23:00.364  }
00:23:00.364  {
00:23:00.364    "lcore": 4,
00:23:00.364    "busy": 35886752734,
00:23:00.364    "idle": 2892757614,
00:23:00.364    "in_interrupt": false,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": [
00:23:00.364      {
00:23:00.364        "name": "thread4",
00:23:00.364        "id": 4,
00:23:00.364        "cpumask": "10",
00:23:00.364        "elapsed": 35793224432
00:23:00.364      }
00:23:00.364    ]
00:23:00.364  }
00:23:00.364  {
00:23:00.364    "lcore": 37,
00:23:00.364    "busy": 0,
00:23:00.364    "idle": 1969024806,
00:23:00.364    "in_interrupt": true,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": []
00:23:00.364  }
00:23:00.364  {
00:23:00.364    "lcore": 38,
00:23:00.364    "busy": 0,
00:23:00.364    "idle": 1969312040,
00:23:00.364    "in_interrupt": true,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": []
00:23:00.364  }
00:23:00.364  {
00:23:00.364    "lcore": 39,
00:23:00.364    "busy": 0,
00:23:00.364    "idle": 1960580738,
00:23:00.364    "in_interrupt": true,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": []
00:23:00.364  }
00:23:00.364  {
00:23:00.364    "lcore": 40,
00:23:00.364    "busy": 0,
00:23:00.364    "idle": 1960681164,
00:23:00.364    "in_interrupt": true,
00:23:00.364    "core_freq": 2300,
00:23:00.364    "lw_threads": []
00:23:00.364  }'
00:23:00.364    00:53:48	-- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 2) | .lw_threads[].id'
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@68 -- # [[ -z '' ]]
00:23:00.364    00:53:48	-- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread2")'
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@69 -- # [[ -n {
00:23:00.364    "name": "thread2",
00:23:00.364    "id": 2,
00:23:00.364    "cpumask": "4",
00:23:00.364    "elapsed": 11408148192
00:23:00.364  } ]]
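
The three assertions traced at interrupt.sh@68-70 close the loop for each thread: the vacated lcore must list no lightweight threads, the thread must reappear under the main core's reactor (lcore 1, where app_thread now shares the core with thread2), and the vacated cpu must have been measured idle. Reconstructed for the cpu-2/thread2 pair:

    # Post-deactivation checks (sketch of interrupt.sh@68-70).
    [[ -z $(jq -r 'select(.lcore == 2) | .lw_threads[].id' <<< "$reactor_framework") ]]
    [[ -n $(jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread2")' \
              <<< "$reactor_framework") ]]
    (( is_idle[cpu] == 1 ))
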
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 ))
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}"
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@64 -- # active_thread 3 0
00:23:00.364   00:53:48	-- scheduler/common.sh@479 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 3 0
00:23:00.364   00:53:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:00.364   00:53:48	-- common/autotest_common.sh@10 -- # set +x
00:23:00.364   00:53:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu")
00:23:00.364   00:53:48	-- scheduler/interrupt.sh@66 -- # collect_cpu_idle
00:23:00.364   00:53:48	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:23:00.364   00:53:48	-- scheduler/common.sh@628 -- # local time=5
00:23:00.364   00:53:48	-- scheduler/common.sh@629 -- # local cpu
00:23:00.364   00:53:48	-- scheduler/common.sh@630 -- # local samples
00:23:00.364   00:53:48	-- scheduler/common.sh@631 -- # is_idle=()
00:23:00.364   00:53:48	-- scheduler/common.sh@631 -- # local -g is_idle
00:23:00.364   00:53:48	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5
00:23:00.364  Collecting cpu idle stats (cpus: 3) for 5 seconds...
00:23:00.364   00:53:48	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 3
00:23:00.364   00:53:48	-- scheduler/common.sh@483 -- # xtrace_disable
00:23:00.364   00:53:48	-- common/autotest_common.sh@10 -- # set +x
00:23:06.926   00:53:54	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:23:06.927   00:53:54	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:23:06.927   00:53:54	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:23:06.927    00:53:54	-- scheduler/common.sh@641 -- # calc_median 0 1 100 100 100
00:23:06.927    00:53:54	-- scheduler/common.sh@727 -- # samples=('0' '1' '100' '100' '100')
00:23:06.927    00:53:54	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:23:06.927    00:53:54	-- scheduler/common.sh@728 -- # local middle median sample
00:23:06.927    00:53:54	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:23:06.927     00:53:54	-- scheduler/common.sh@730 -- # printf '%s\n' 0 1 100 100 100
00:23:06.927     00:53:54	-- scheduler/common.sh@730 -- # sort -n
00:23:06.927    00:53:54	-- scheduler/common.sh@732 -- # middle=2
00:23:06.927    00:53:54	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:23:06.927    00:53:54	-- scheduler/common.sh@736 -- # median=100
00:23:06.927    00:53:54	-- scheduler/common.sh@739 -- # echo 100
00:23:06.927   00:53:54	-- scheduler/common.sh@641 -- # load_median=100
00:23:06.927   00:53:54	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '0 1 100 100 100' 60 100
00:23:06.927  * cpu3 idle samples: 0 1 100 100 100 (avg: 60%, median: 100%)
00:23:06.927    00:53:54	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 3 user
00:23:06.927    00:53:54	-- scheduler/common.sh@678 -- # local cpu=3 time=user
00:23:06.927    00:53:54	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:23:06.927    00:53:54	-- scheduler/common.sh@682 -- # [[ -v raw_samples_3 ]]
00:23:06.927    00:53:54	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_3
00:23:06.927    00:53:54	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:23:06.927    00:53:54	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:23:06.927    00:53:54	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:23:06.927    00:53:54	-- scheduler/common.sh@690 -- # case "$time" in
00:23:06.927    00:53:54	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:23:06.927     00:53:54	-- scheduler/common.sh@691 -- # trap - ERR
00:23:06.927     00:53:54	-- scheduler/common.sh@691 -- # print_backtrace
00:23:06.927     00:53:54	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:23:06.927     00:53:54	-- common/autotest_common.sh@1142 -- # return 0
00:23:06.927     00:53:54	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:23:06.927    00:53:54	-- scheduler/common.sh@697 -- # usage=0
00:23:06.927    00:53:54	-- scheduler/common.sh@698 -- # usage=0
00:23:06.927    00:53:54	-- scheduler/common.sh@700 -- # printf %u 0
00:23:06.927    00:53:54	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 3 user 0
00:23:06.927  * cpu3 user usage: 0
00:23:06.927    00:53:54	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 3 '64793 64892 64892 64892 64892'
00:23:06.927  * cpu3 user samples: 64793 64892 64892 64892 64892
00:23:06.927    00:53:54	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 3 '6 6 6 6 6'
00:23:06.927  * cpu3 nice samples: 6 6 6 6 6
00:23:06.927    00:53:54	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 3 '4961 4961 4961 4961 4961'
00:23:06.927  * cpu3 system samples: 4961 4961 4961 4961 4961
00:23:06.927   00:53:54	-- scheduler/common.sh@652 -- # user_load=0
00:23:06.927   00:53:54	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:23:06.927   00:53:54	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 3
00:23:06.927  * cpu3 is idle
00:23:06.927   00:53:54	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:23:06.927    00:53:54	-- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors
00:23:06.927    00:53:54	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.927    00:53:54	-- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]'
00:23:06.927    00:53:54	-- common/autotest_common.sh@10 -- # set +x
00:23:06.927    00:53:54	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.927   00:53:54	-- scheduler/interrupt.sh@67 -- # reactor_framework='{
00:23:06.927    "lcore": 1,
00:23:06.927    "busy": 4020292704,
00:23:06.927    "idle": 131788376700,
00:23:06.927    "in_interrupt": false,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": [
00:23:06.927      {
00:23:06.927        "name": "app_thread",
00:23:06.927        "id": 1,
00:23:06.927        "cpumask": "2",
00:23:06.927        "elapsed": 135846635794
00:23:06.927      },
00:23:06.927      {
00:23:06.927        "name": "thread2",
00:23:06.927        "id": 2,
00:23:06.927        "cpumask": "4",
00:23:06.927        "elapsed": 25676655072
00:23:06.927      },
00:23:06.927      {
00:23:06.927        "name": "thread3",
00:23:06.927        "id": 3,
00:23:06.927        "cpumask": "8",
00:23:06.927        "elapsed": 11873983958
00:23:06.927      }
00:23:06.927    ]
00:23:06.927  }
00:23:06.927  {
00:23:06.927    "lcore": 2,
00:23:06.927    "busy": 67862775200,
00:23:06.927    "idle": 7724098798,
00:23:06.927    "in_interrupt": true,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": []
00:23:06.927  }
00:23:06.927  {
00:23:06.927    "lcore": 3,
00:23:06.927    "busy": 59121370722,
00:23:06.927    "idle": 7493480410,
00:23:06.927    "in_interrupt": true,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": []
00:23:06.927  }
00:23:06.927  {
00:23:06.927    "lcore": 4,
00:23:06.927    "busy": 50149468504,
00:23:06.927    "idle": 2892757614,
00:23:06.927    "in_interrupt": false,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": [
00:23:06.927      {
00:23:06.927        "name": "thread4",
00:23:06.927        "id": 4,
00:23:06.927        "cpumask": "10",
00:23:06.927        "elapsed": 50061731312
00:23:06.927      }
00:23:06.927    ]
00:23:06.927  }
00:23:06.927  {
00:23:06.927    "lcore": 37,
00:23:06.927    "busy": 0,
00:23:06.927    "idle": 1969024806,
00:23:06.927    "in_interrupt": true,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": []
00:23:06.927  }
00:23:06.927  {
00:23:06.927    "lcore": 38,
00:23:06.927    "busy": 0,
00:23:06.927    "idle": 1969312040,
00:23:06.927    "in_interrupt": true,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": []
00:23:06.927  }
00:23:06.927  {
00:23:06.927    "lcore": 39,
00:23:06.927    "busy": 0,
00:23:06.927    "idle": 1960580738,
00:23:06.927    "in_interrupt": true,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": []
00:23:06.927  }
00:23:06.927  {
00:23:06.927    "lcore": 40,
00:23:06.927    "busy": 0,
00:23:06.927    "idle": 1960681164,
00:23:06.927    "in_interrupt": true,
00:23:06.927    "core_freq": 2300,
00:23:06.927    "lw_threads": []
00:23:06.927  }'
00:23:06.927    00:53:54	-- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id'
00:23:06.927   00:53:54	-- scheduler/interrupt.sh@68 -- # [[ -z '' ]]
00:23:06.927    00:53:54	-- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread3")'
00:23:06.927   00:53:55	-- scheduler/interrupt.sh@69 -- # [[ -n {
00:23:06.927    "name": "thread3",
00:23:06.927    "id": 3,
00:23:06.927    "cpumask": "8",
00:23:06.927    "elapsed": 11873983958
00:23:06.927  } ]]
00:23:06.927   00:53:55	-- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 ))
00:23:06.927   00:53:55	-- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}"
00:23:06.927   00:53:55	-- scheduler/interrupt.sh@64 -- # active_thread 4 0
00:23:06.927   00:53:55	-- scheduler/common.sh@479 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 4 0
00:23:06.927   00:53:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:06.927   00:53:55	-- common/autotest_common.sh@10 -- # set +x
00:23:06.927   00:53:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:06.927   00:53:55	-- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu")
00:23:06.927   00:53:55	-- scheduler/interrupt.sh@66 -- # collect_cpu_idle
00:23:06.927   00:53:55	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:23:06.927   00:53:55	-- scheduler/common.sh@628 -- # local time=5
00:23:06.927   00:53:55	-- scheduler/common.sh@629 -- # local cpu
00:23:06.927   00:53:55	-- scheduler/common.sh@630 -- # local samples
00:23:06.927   00:53:55	-- scheduler/common.sh@631 -- # is_idle=()
00:23:06.927   00:53:55	-- scheduler/common.sh@631 -- # local -g is_idle
00:23:06.927   00:53:55	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5
00:23:06.927  Collecting cpu idle stats (cpus: 4) for 5 seconds...
00:23:06.927   00:53:55	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 4
00:23:06.927   00:53:55	-- scheduler/common.sh@483 -- # xtrace_disable
00:23:06.927   00:53:55	-- common/autotest_common.sh@10 -- # set +x
00:23:12.196   00:54:01	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:23:12.196   00:54:01	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:23:12.196   00:54:01	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:23:12.196    00:54:01	-- scheduler/common.sh@641 -- # calc_median 0 0 32 100 100
00:23:12.196    00:54:01	-- scheduler/common.sh@727 -- # samples=('0' '0' '32' '100' '100')
00:23:12.196    00:54:01	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:23:12.196    00:54:01	-- scheduler/common.sh@728 -- # local middle median sample
00:23:12.196    00:54:01	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:23:12.196     00:54:01	-- scheduler/common.sh@730 -- # printf '%s\n' 0 0 32 100 100
00:23:12.196     00:54:01	-- scheduler/common.sh@730 -- # sort -n
00:23:12.196    00:54:01	-- scheduler/common.sh@732 -- # middle=2
00:23:12.196    00:54:01	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:23:12.196    00:54:01	-- scheduler/common.sh@736 -- # median=32
00:23:12.196    00:54:01	-- scheduler/common.sh@739 -- # echo 32
00:23:12.196   00:54:01	-- scheduler/common.sh@641 -- # load_median=32
00:23:12.196   00:54:01	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '0 0 32 100 100' 46 32
00:23:12.196  * cpu4 idle samples: 0 0 32 100 100 (avg: 46%, median: 32%)
00:23:12.196    00:54:01	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 4 user
00:23:12.196    00:54:01	-- scheduler/common.sh@678 -- # local cpu=4 time=user
00:23:12.196    00:54:01	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:23:12.196    00:54:01	-- scheduler/common.sh@682 -- # [[ -v raw_samples_4 ]]
00:23:12.196    00:54:01	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_4
00:23:12.196    00:54:01	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:23:12.196    00:54:01	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:23:12.196    00:54:01	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:23:12.196    00:54:01	-- scheduler/common.sh@690 -- # case "$time" in
00:23:12.196    00:54:01	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:23:12.196     00:54:01	-- scheduler/common.sh@691 -- # trap - ERR
00:23:12.196     00:54:01	-- scheduler/common.sh@691 -- # print_backtrace
00:23:12.196     00:54:01	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:23:12.196     00:54:01	-- common/autotest_common.sh@1142 -- # return 0
00:23:12.196     00:54:01	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:23:12.196    00:54:01	-- scheduler/common.sh@697 -- # usage=0
00:23:12.196    00:54:01	-- scheduler/common.sh@698 -- # usage=0
00:23:12.196    00:54:01	-- scheduler/common.sh@700 -- # printf %u 0
00:23:12.196    00:54:01	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 4 user 0
00:23:12.196  * cpu4 user usage: 0
00:23:12.196    00:54:01	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 4 '27974 28075 28142 28142 28142'
00:23:12.196  * cpu4 user samples: 27974 28075 28142 28142 28142
00:23:12.196    00:54:01	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0'
00:23:12.196  * cpu4 nice samples: 0 0 0 0 0
00:23:12.196    00:54:01	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 4 '4804 4804 4804 4804 4804'
00:23:12.196  * cpu4 system samples: 4804 4804 4804 4804 4804
00:23:12.196   00:54:01	-- scheduler/common.sh@652 -- # user_load=0
00:23:12.196   00:54:01	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:23:12.196   00:54:01	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 4
00:23:12.196  * cpu4 is idle
00:23:12.196   00:54:01	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
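The usage check above derives per-mode CPU time from consecutive raw /proc/stat samples: the delta of the last two user readings, scaled by CLK_TCK. The `trap - ERR` / `print_backtrace` pair in the trace is bash firing the ERR trap because `(( clk_delta += 0 ))` evaluates to zero and therefore returns status 1; the backtrace helper sees errexit is not active (no `e` in the `hxBET` flag string) and returns 0, so the test continues. A hedged sketch with the sample values from the trace:

    # Illustrative names and values from the trace, not the exact SPDK source.
    user=(27974 28075 28142 28142 28142)             # raw per-sample user ticks
    clk_delta=0
    (( clk_delta += user[-1] - user[-2] )) || true   # 0 ticks -> (( )) exits 1, hence the ERR-trap lines
    clk_tck=$(getconf CLK_TCK)                       # ticks per second, typically 100 on Linux
    usage=$(( clk_delta / clk_tck ))                 # whole seconds of user time in the window: 0
    printf '* cpu%u %s usage: %u\n' 4 user "$usage"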
00:23:12.196    00:54:01	-- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors
00:23:12.196    00:54:01	-- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]'
00:23:12.196    00:54:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.196    00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.196    00:54:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@67 -- # reactor_framework='{
00:23:12.196    "lcore": 1,
00:23:12.196    "busy": 4043464540,
00:23:12.196    "idle": 146027025434,
00:23:12.196    "in_interrupt": false,
00:23:12.196    "core_freq": 1900,
00:23:12.196    "lw_threads": [
00:23:12.196      {
00:23:12.196        "name": "app_thread",
00:23:12.196        "id": 1,
00:23:12.196        "cpumask": "2",
00:23:12.196        "elapsed": 150108401472
00:23:12.196      },
00:23:12.196      {
00:23:12.196        "name": "thread2",
00:23:12.196        "id": 2,
00:23:12.196        "cpumask": "4",
00:23:12.196        "elapsed": 39938420750
00:23:12.196      },
00:23:12.196      {
00:23:12.196        "name": "thread3",
00:23:12.196        "id": 3,
00:23:12.196        "cpumask": "8",
00:23:12.196        "elapsed": 26135749636
00:23:12.196      },
00:23:12.196      {
00:23:12.196        "name": "thread4",
00:23:12.196        "id": 4,
00:23:12.196        "cpumask": "10",
00:23:12.196        "elapsed": 10051759640
00:23:12.196      }
00:23:12.196    ]
00:23:12.196  }
00:23:12.196  {
00:23:12.196    "lcore": 2,
00:23:12.196    "busy": 67862775200,
00:23:12.196    "idle": 7724098798,
00:23:12.196    "in_interrupt": true,
00:23:12.196    "core_freq": 2300,
00:23:12.196    "lw_threads": []
00:23:12.196  }
00:23:12.196  {
00:23:12.196    "lcore": 3,
00:23:12.196    "busy": 59121370722,
00:23:12.196    "idle": 7493480410,
00:23:12.196    "in_interrupt": true,
00:23:12.196    "core_freq": 2300,
00:23:12.196    "lw_threads": []
00:23:12.196  }
00:23:12.196  {
00:23:12.196    "lcore": 4,
00:23:12.196    "busy": 50379810632,
00:23:12.196    "idle": 9085009284,
00:23:12.196    "in_interrupt": true,
00:23:12.196    "core_freq": 2300,
00:23:12.196    "lw_threads": []
00:23:12.196  }
00:23:12.196  {
00:23:12.196    "lcore": 37,
00:23:12.196    "busy": 0,
00:23:12.196    "idle": 1969024806,
00:23:12.196    "in_interrupt": true,
00:23:12.196    "core_freq": 2300,
00:23:12.196    "lw_threads": []
00:23:12.196  }
00:23:12.196  {
00:23:12.196    "lcore": 38,
00:23:12.196    "busy": 0,
00:23:12.196    "idle": 1969312040,
00:23:12.196    "in_interrupt": true,
00:23:12.196    "core_freq": 2300,
00:23:12.196    "lw_threads": []
00:23:12.196  }
00:23:12.196  {
00:23:12.196    "lcore": 39,
00:23:12.196    "busy": 0,
00:23:12.196    "idle": 1960580738,
00:23:12.196    "in_interrupt": true,
00:23:12.196    "core_freq": 2300,
00:23:12.196    "lw_threads": []
00:23:12.196  }
00:23:12.196  {
00:23:12.196    "lcore": 40,
00:23:12.196    "busy": 0,
00:23:12.196    "idle": 1960681164,
00:23:12.196    "in_interrupt": true,
00:23:12.196    "core_freq": 2300,
00:23:12.196    "lw_threads": []
00:23:12.196  }'
00:23:12.196    00:54:01	-- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id'
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@68 -- # [[ -z '' ]]
00:23:12.196    00:54:01	-- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread4")'
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@69 -- # [[ -n {
00:23:12.196    "name": "thread4",
00:23:12.196    "id": 4,
00:23:12.196    "cpumask": "10",
00:23:12.196    "elapsed": 10051759640
00:23:12.196  } ]]
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 ))
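The two jq probes above encode the pass criterion for this phase: once cpu4 goes idle in interrupt mode, it must host no lightweight threads, and thread4 must have migrated back under lcore 1. Restated as a standalone sketch against the captured dump (filters copied from the trace):

    # $reactor_framework holds the JSON stream produced by
    # 'rpc_cmd framework_get_reactors | jq -r .reactors[]' above.
    on_cpu4=$(jq -r 'select(.lcore == 4) | .lw_threads[].id' <<< "$reactor_framework")
    [[ -z $on_cpu4 ]]        # cpu4 must hold no lightweight threads once idle
    moved=$(jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread4")' <<< "$reactor_framework")
    [[ -n $moved ]]          # thread4 must now live under lcore 1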
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}"
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@74 -- # destroy_thread 2
00:23:12.196   00:54:01	-- scheduler/common.sh@475 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 2
00:23:12.196   00:54:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.196   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.196   00:54:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}"
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@74 -- # destroy_thread 3
00:23:12.196   00:54:01	-- scheduler/common.sh@475 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 3
00:23:12.196   00:54:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.196   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.196   00:54:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}"
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@74 -- # destroy_thread 4
00:23:12.196   00:54:01	-- scheduler/common.sh@475 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 4
00:23:12.196   00:54:01	-- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.196   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.196   00:54:01	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
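Teardown then walks the tracked thread ids and deletes each one through the scheduler plugin RPC, roughly as sketched here (destroy_thread is the thin wrapper the trace shows):

    for cpu in "${!threads[@]}"; do
        # destroy_thread <id> wraps the RPC seen above:
        rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "${threads[cpu]}"
    done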
00:23:12.196   00:54:01	-- scheduler/interrupt.sh@1 -- # killprocess 1078916
00:23:12.196   00:54:01	-- common/autotest_common.sh@936 -- # '[' -z 1078916 ']'
00:23:12.196   00:54:01	-- common/autotest_common.sh@940 -- # kill -0 1078916
00:23:12.196    00:54:01	-- common/autotest_common.sh@941 -- # uname
00:23:12.196   00:54:01	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:12.197    00:54:01	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1078916
00:23:12.197   00:54:01	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:12.197   00:54:01	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:12.197   00:54:01	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1078916'
00:23:12.197  killing process with pid 1078916
00:23:12.197   00:54:01	-- common/autotest_common.sh@955 -- # kill 1078916
00:23:12.197  [2024-12-17 00:54:01.295445] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:23:12.197   00:54:01	-- common/autotest_common.sh@960 -- # wait 1078916
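killprocess, traced above for pid 1078916, validates the pid, probes it with `kill -0`, reads its comm name to make sure it is not about to signal the sudo wrapper itself, then kills and reaps it. A simplified sketch; the real helper in common/autotest_common.sh handles the sudo case rather than bailing:

    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                     # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $name == sudo ]] && return 1                # simplified; real helper special-cases sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                     # wait assumes $pid is a child of this shell
    }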
00:23:12.455  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:23:12.455  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:23:12.455  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:23:12.455  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:23:12.455  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:23:12.455  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:23:12.455  POWER: Power management governor of lcore 4 has been set to 'powersave' successfully
00:23:12.455  POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original
00:23:12.455  POWER: Power management governor of lcore 37 has been set to 'powersave' successfully
00:23:12.455  POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original
00:23:12.455  POWER: Power management governor of lcore 38 has been set to 'powersave' successfully
00:23:12.455  POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original
00:23:12.456  POWER: Power management governor of lcore 39 has been set to 'powersave' successfully
00:23:12.456  POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original
00:23:12.456  POWER: Power management governor of lcore 40 has been set to 'powersave' successfully
00:23:12.456  POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original
00:23:12.456  
00:23:12.456  real	1m6.294s
00:23:12.456  user	2m38.240s
00:23:12.456  sys	0m1.157s
00:23:12.456   00:54:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:12.456   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.456  ************************************
00:23:12.456  END TEST interrupt_mode
00:23:12.456  ************************************
00:23:12.456   00:54:01	-- scheduler/scheduler.sh@1 -- # restore_cgroups
00:23:12.456   00:54:01	-- scheduler/isolate_cores.sh@11 -- # xtrace_disable
00:23:12.456   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.456  Moving 1070502 (PF_SUPERPRIV,PF_RANDOMIZE) to / from /cpuset
00:23:12.456  Moved 1 processes, failed 0
00:23:12.456  
00:23:12.456  real	1m40.155s
00:23:12.456  user	3m39.860s
00:23:12.456  sys	0m9.122s
00:23:12.456   00:54:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:12.456   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.456  ************************************
00:23:12.456  END TEST scheduler
00:23:12.456  ************************************
00:23:12.715   00:54:01	-- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:23:12.715   00:54:01	-- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:23:12.715   00:54:01	-- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:23:12.715   00:54:01	-- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:23:12.715   00:54:01	-- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:23:12.715   00:54:01	-- common/autotest_common.sh@722 -- # xtrace_disable
00:23:12.715   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:12.715   00:54:01	-- spdk/autotest.sh@373 -- # autotest_cleanup
00:23:12.715   00:54:01	-- common/autotest_common.sh@1381 -- # local autotest_es=0
00:23:12.715   00:54:01	-- common/autotest_common.sh@1382 -- # xtrace_disable
00:23:12.715   00:54:01	-- common/autotest_common.sh@10 -- # set +x
00:23:16.903  INFO: APP EXITING
00:23:16.903  INFO: killing all VMs
00:23:16.903  INFO: killing vhost app
00:23:16.903  INFO: EXIT DONE
00:23:20.188  Waiting for block devices as requested
00:23:20.188  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:23:20.188  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:23:20.188  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:23:20.188  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:23:20.188  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:23:20.447  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:23:20.447  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:23:20.447  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:23:20.706  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:23:20.706  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:23:20.706  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:23:20.966  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:23:20.966  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:23:20.966  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:23:21.225  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:23:21.225  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:23:21.225  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
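Each `vfio-pci -> nvme` / `vfio-pci -> ioatdma` line above is a PCI function being handed back from the userspace vfio-pci driver to its kernel driver so its block devices reappear. The harness drives this through its own setup scripts, but the underlying mechanism is the standard sysfs rebind, sketched here for the NVMe device (paths are the stock kernel interface, not harness code):

    bdf=0000:5e:00.0
    echo "$bdf" | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind >/dev/null
    echo nvme   | sudo tee "/sys/bus/pci/devices/$bdf/driver_override" >/dev/null
    echo "$bdf" | sudo tee /sys/bus/pci/drivers_probe >/dev/null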
00:23:24.512  Cleaning
00:23:24.512  Removing:    /var/run/dpdk/spdk0/config
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:23:24.512  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:23:24.512  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:23:24.512  Removing:    /dev/shm/bdevperf_trace.pid1062877
00:23:24.512  Removing:    /dev/shm/spdk_tgt_trace.pid940008
00:23:24.512  Removing:    /var/run/dpdk/spdk0
00:23:24.512  Removing:    /var/run/dpdk/spdk_pid1002362
00:23:24.512  Removing:    /var/run/dpdk/spdk_pid1008190
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1012398
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1018813
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1024274
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1030931
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1032233
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1040084
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1053620
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1053840
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1057179
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1060483
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1061218
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1062031
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1062877
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1063236
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1064419
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1065538
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1066140
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1066927
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1067292
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1067690
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1071679
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1075009
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid1078916
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid937421
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid938625
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid940008
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid940640
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid941049
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid941381
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid941722
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid942096
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid942268
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid942442
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid942745
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid943291
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid945978
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid946320
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid946705
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid946886
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid947468
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid947647
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid948292
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid948406
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid948787
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid948969
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid949175
00:23:24.771  Removing:    /var/run/dpdk/spdk_pid949211
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid949891
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid950128
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid950390
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid950909
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid951008
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid951236
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid951431
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid951626
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid951805
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid952007
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid952186
00:23:24.772  Removing:    /var/run/dpdk/spdk_pid952387
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid952565
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid952763
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid952947
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid953146
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid953327
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid953529
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid953707
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid953920
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid954106
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid954345
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid954546
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid954782
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid954963
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid955205
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid955391
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid955590
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid955771
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid955966
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid956157
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid956350
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid956530
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid956732
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid956912
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid957111
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid957294
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid957491
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid957680
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid957880
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid958077
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid958330
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid958530
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid958768
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid958978
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid959185
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid959314
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid959627
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid960112
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid961298
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid962220
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid965059
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid966696
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid968388
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid969576
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid969608
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid969685
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid973827
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid974754
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid978290
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid979921
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid981568
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid982731
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid982837
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid982860
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid996091
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid997528
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid998316
00:23:25.031  Removing:    /var/run/dpdk/spdk_pid999218
00:23:25.031  Clean
00:23:25.289  killing process with pid 895128
00:23:31.852  killing process with pid 895125
00:23:31.852  killing process with pid 895127
00:23:31.852  killing process with pid 895126
00:23:31.852   00:54:20	-- common/autotest_common.sh@1446 -- # return 0
00:23:31.852   00:54:20	-- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:23:31.852   00:54:20	-- common/autotest_common.sh@728 -- # xtrace_disable
00:23:31.852   00:54:20	-- common/autotest_common.sh@10 -- # set +x
00:23:31.852   00:54:20	-- spdk/autotest.sh@376 -- # timing_exit autotest
00:23:31.852   00:54:20	-- common/autotest_common.sh@728 -- # xtrace_disable
00:23:31.852   00:54:20	-- common/autotest_common.sh@10 -- # set +x
00:23:31.853   00:54:20	-- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt
00:23:31.853   00:54:20	-- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log ]]
00:23:31.853   00:54:20	-- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log
00:23:31.853   00:54:20	-- spdk/autotest.sh@381 -- # [[ y == y ]]
00:23:31.853    00:54:20	-- spdk/autotest.sh@383 -- # hostname
00:23:31.853   00:54:20	-- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvme-phy-autotest/spdk -t spdk-wfp-45 -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info
00:23:31.853  geninfo: WARNING: invalid characters removed from testname!
00:23:53.793   00:54:40	-- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:53.793   00:54:42	-- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:55.172   00:54:44	-- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:57.709   00:54:46	-- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:24:00.248   00:54:49	-- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:24:02.785   00:54:51	-- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
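The lcov sequence above merges the pre-test and post-test captures, then prunes DPDK sources, system headers, and a few example apps from the combined tracefile with repeated -r passes. A condensed sketch; the --rc branch/function-coverage options from the trace are omitted for brevity, and genhtml is the optional follow-on report step:

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge captures
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop DPDK sources
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info        # drop system headers
    genhtml -q cov_total.info -o coverage                         # optional HTML report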
00:24:05.321   00:54:54	-- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:24:05.321     00:54:54	-- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:24:05.321      00:54:54	-- common/autotest_common.sh@1690 -- $ lcov --version
00:24:05.321      00:54:54	-- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:24:05.321     00:54:54	-- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:24:05.321     00:54:54	-- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:24:05.321     00:54:54	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:24:05.321     00:54:54	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:24:05.321     00:54:54	-- scripts/common.sh@335 -- $ IFS=.-:
00:24:05.321     00:54:54	-- scripts/common.sh@335 -- $ read -ra ver1
00:24:05.321     00:54:54	-- scripts/common.sh@336 -- $ IFS=.-:
00:24:05.321     00:54:54	-- scripts/common.sh@336 -- $ read -ra ver2
00:24:05.321     00:54:54	-- scripts/common.sh@337 -- $ local 'op=<'
00:24:05.321     00:54:54	-- scripts/common.sh@339 -- $ ver1_l=2
00:24:05.321     00:54:54	-- scripts/common.sh@340 -- $ ver2_l=1
00:24:05.321     00:54:54	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:24:05.321     00:54:54	-- scripts/common.sh@343 -- $ case "$op" in
00:24:05.321     00:54:54	-- scripts/common.sh@344 -- $ : 1
00:24:05.321     00:54:54	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:24:05.321     00:54:54	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:05.321      00:54:54	-- scripts/common.sh@364 -- $ decimal 1
00:24:05.321      00:54:54	-- scripts/common.sh@352 -- $ local d=1
00:24:05.321      00:54:54	-- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:24:05.321      00:54:54	-- scripts/common.sh@354 -- $ echo 1
00:24:05.321     00:54:54	-- scripts/common.sh@364 -- $ ver1[v]=1
00:24:05.321      00:54:54	-- scripts/common.sh@365 -- $ decimal 2
00:24:05.321      00:54:54	-- scripts/common.sh@352 -- $ local d=2
00:24:05.321      00:54:54	-- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:24:05.321      00:54:54	-- scripts/common.sh@354 -- $ echo 2
00:24:05.321     00:54:54	-- scripts/common.sh@365 -- $ ver2[v]=2
00:24:05.321     00:54:54	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:24:05.321     00:54:54	-- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:24:05.321     00:54:54	-- scripts/common.sh@367 -- $ return 0
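The `lt 1.15 2` trace above is the harness's component-wise version comparison: split both strings on `.`, `-` and `:` via IFS, walk up to the longer component list padding missing parts with 0, and decide at the first difference. Here 1 < 2, so this lcov predates 2.x and the legacy --rc option spelling is selected below. A standalone sketch (ver_lt is an illustrative name; it assumes numeric components, whereas the harness's decimal helper also sanitizes non-numeric ones):

    ver_lt() {
        local IFS=.-:
        local -a v1 v2
        local i n
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace's result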
00:24:05.321     00:54:54	-- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:05.321     00:54:54	-- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:24:05.321  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.321  		--rc genhtml_branch_coverage=1
00:24:05.321  		--rc genhtml_function_coverage=1
00:24:05.321  		--rc genhtml_legend=1
00:24:05.321  		--rc geninfo_all_blocks=1
00:24:05.321  		--rc geninfo_unexecuted_blocks=1
00:24:05.321  		
00:24:05.321  		'
00:24:05.321     00:54:54	-- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:24:05.321  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.321  		--rc genhtml_branch_coverage=1
00:24:05.321  		--rc genhtml_function_coverage=1
00:24:05.321  		--rc genhtml_legend=1
00:24:05.321  		--rc geninfo_all_blocks=1
00:24:05.321  		--rc geninfo_unexecuted_blocks=1
00:24:05.321  		
00:24:05.321  		'
00:24:05.322     00:54:54	-- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 
00:24:05.322  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.322  		--rc genhtml_branch_coverage=1
00:24:05.322  		--rc genhtml_function_coverage=1
00:24:05.322  		--rc genhtml_legend=1
00:24:05.322  		--rc geninfo_all_blocks=1
00:24:05.322  		--rc geninfo_unexecuted_blocks=1
00:24:05.322  		
00:24:05.322  		'
00:24:05.322     00:54:54	-- common/autotest_common.sh@1704 -- $ LCOV='lcov 
00:24:05.322  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.322  		--rc genhtml_branch_coverage=1
00:24:05.322  		--rc genhtml_function_coverage=1
00:24:05.322  		--rc genhtml_legend=1
00:24:05.322  		--rc geninfo_all_blocks=1
00:24:05.322  		--rc geninfo_unexecuted_blocks=1
00:24:05.322  		
00:24:05.322  		'
00:24:05.322    00:54:54	-- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:24:05.322     00:54:54	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:24:05.322     00:54:54	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:05.322     00:54:54	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:05.322      00:54:54	-- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.322      00:54:54	-- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.322      00:54:54	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.322      00:54:54	-- paths/export.sh@5 -- $ export PATH
00:24:05.322      00:54:54	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
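Each sourced paths/export.sh step above prepends one toolchain directory, which is why the go, golangci and protoc directories appear more than once in the final PATH. Purely as an aside (the harness does not do this), an order-preserving dedup is a one-liner:

    # Keep the first occurrence of each PATH entry; awk approach, illustrative only.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    export PATH=${PATH%:}    # trim the trailing ':' that ORS leaves behind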
00:24:05.322    00:54:54	-- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output
00:24:05.322      00:54:54	-- common/autobuild_common.sh@440 -- $ date +%s
00:24:05.322     00:54:54	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734393294.XXXXXX
00:24:05.322    00:54:54	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734393294.mBiyfX
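The packaging workspace is a unique, epoch-stamped temp dir; an equivalent one-liner to the mktemp trace above:

    # Creates e.g. /tmp/spdk_1734393294.mBiyfX under $TMPDIR (default /tmp).
    SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")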
00:24:05.322    00:54:54	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:24:05.322    00:54:54	-- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
00:24:05.322     00:54:54	-- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvme-phy-autotest/dpdk/build
00:24:05.322    00:54:54	-- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvme-phy-autotest/dpdk'
00:24:05.322    00:54:54	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp'
00:24:05.322    00:54:54	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp  --exclude /var/jenkins/workspace/nvme-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:24:05.322     00:54:54	-- common/autobuild_common.sh@456 -- $ get_config_params
00:24:05.322     00:54:54	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:24:05.322     00:54:54	-- common/autotest_common.sh@10 -- $ set +x
00:24:05.322    00:54:54	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvme-phy-autotest/dpdk/build'
00:24:05.322   00:54:54	-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:24:05.322   00:54:54	-- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk
00:24:05.322   00:54:54	-- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:24:05.322   00:54:54	-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:24:05.322   00:54:54	-- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:24:05.322   00:54:54	-- spdk/autopackage.sh@19 -- $ timing_finish
00:24:05.322   00:54:54	-- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:24:05.322   00:54:54	-- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:24:05.322   00:54:54	-- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt
00:24:05.322   00:54:54	-- spdk/autopackage.sh@20 -- $ exit 0
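timing_finish renders the accumulated timing.txt (folded step;duration lines emitted by timing_enter/timing_exit) into an SVG flamegraph when FlameGraph is installed. A standalone equivalent of the call traced above; flamegraph.pl writes SVG to stdout, and the redirect target here is an assumption since the trace does not show where the SVG lands:

    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' --nametype Step: --countname seconds \
        timing.txt > timing.svg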
00:24:05.322  + [[ -n 829931 ]]
00:24:05.322  + sudo kill 829931
00:24:05.331  [Pipeline] }
00:24:05.345  [Pipeline] // stage
00:24:05.349  [Pipeline] }
00:24:05.362  [Pipeline] // timeout
00:24:05.367  [Pipeline] }
00:24:05.380  [Pipeline] // catchError
00:24:05.384  [Pipeline] }
00:24:05.398  [Pipeline] // wrap
00:24:05.403  [Pipeline] }
00:24:05.415  [Pipeline] // catchError
00:24:05.422  [Pipeline] stage
00:24:05.424  [Pipeline] { (Epilogue)
00:24:05.435  [Pipeline] catchError
00:24:05.437  [Pipeline] {
00:24:05.448  [Pipeline] echo
00:24:05.450  Cleanup processes
00:24:05.455  [Pipeline] sh
00:24:05.741  + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:24:05.741  1096825 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:24:05.753  [Pipeline] sh
00:24:06.179  ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:24:06.179  ++ grep -v 'sudo pgrep'
00:24:06.179  ++ awk '{print $1}'
00:24:06.179  + sudo kill -9
00:24:06.179  + true
00:24:06.208  [Pipeline] sh
00:24:06.491  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:24:16.482  [Pipeline] sh
00:24:16.766  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:24:16.766  Artifacts sizes are good
00:24:16.779  [Pipeline] archiveArtifacts
00:24:16.785  Archiving artifacts
00:24:16.889  [Pipeline] sh
00:24:17.177  + sudo chown -R sys_sgci: /var/jenkins/workspace/nvme-phy-autotest
00:24:17.191  [Pipeline] cleanWs
00:24:17.199  [WS-CLEANUP] Deleting project workspace...
00:24:17.199  [WS-CLEANUP] Deferred wipeout is used...
00:24:17.206  [WS-CLEANUP] done
00:24:17.207  [Pipeline] }
00:24:17.220  [Pipeline] // catchError
00:24:17.230  [Pipeline] sh
00:24:17.515  + logger -p user.info -t JENKINS-CI
00:24:17.524  [Pipeline] }
00:24:17.537  [Pipeline] // stage
00:24:17.541  [Pipeline] }
00:24:17.568  [Pipeline] // node
00:24:17.572  [Pipeline] End of Pipeline
00:24:17.627  Finished: SUCCESS