00:00:00.000 Started by upstream project "autotest-per-patch" build number 132696 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:04.674 The recommended git tool is: git 00:00:04.675 using credential 00000000-0000-0000-0000-000000000002 00:00:04.677 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:04.687 Fetching changes from the remote Git repository 00:00:04.691 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:04.701 Using shallow fetch with depth 1 00:00:04.701 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:04.702 > git --version # timeout=10 00:00:04.712 > git --version # 'git version 2.39.2' 00:00:04.712 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:04.723 Setting http proxy: proxy-dmz.intel.com:911 00:00:04.723 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.485 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.499 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.515 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.515 > git config core.sparsecheckout # timeout=10 00:00:09.529 > git read-tree -mu HEAD # timeout=10 00:00:09.548 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.597 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.597 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.732 [Pipeline] Start of Pipeline 00:00:09.744 [Pipeline] library 00:00:09.745 Loading library shm_lib@master 00:00:09.745 Library shm_lib@master is cached. Copying from home. 00:00:09.789 [Pipeline] node 00:00:09.801 Running on WFP46 in /var/jenkins/workspace/nvme-phy-autotest 00:00:09.803 [Pipeline] { 00:00:09.811 [Pipeline] catchError 00:00:09.812 [Pipeline] { 00:00:09.821 [Pipeline] wrap 00:00:09.828 [Pipeline] { 00:00:09.836 [Pipeline] stage 00:00:09.838 [Pipeline] { (Prologue) 00:00:10.104 [Pipeline] sh 00:00:10.389 + logger -p user.info -t JENKINS-CI 00:00:10.404 [Pipeline] echo 00:00:10.405 Node: WFP46 00:00:10.410 [Pipeline] sh 00:00:10.704 [Pipeline] setCustomBuildProperty 00:00:10.715 [Pipeline] echo 00:00:10.716 Cleanup processes 00:00:10.720 [Pipeline] sh 00:00:11.001 + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:11.001 3736596 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:11.013 [Pipeline] sh 00:00:11.291 ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:11.291 ++ grep -v 'sudo pgrep' 00:00:11.291 ++ awk '{print $1}' 00:00:11.291 + sudo kill -9 00:00:11.291 + true 00:00:11.303 [Pipeline] cleanWs 00:00:11.311 [WS-CLEANUP] Deleting project workspace... 00:00:11.311 [WS-CLEANUP] Deferred wipeout is used... 
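The "Cleanup processes" stage above reduces to a small shell pattern: list any processes still referencing the job workspace, filter out the pgrep invocation itself, and kill whatever remains without failing the step when nothing matches. A minimal standalone sketch of that pattern, assuming the workspace path used by this job (the script itself is illustrative, not part of the pipeline):

#!/usr/bin/env bash
# Kill leftover processes that still reference the job workspace.
# Mirrors the pgrep | grep -v | awk | kill -9 sequence in the log;
# the trailing "|| true" keeps the step green when nothing is found.
WORKSPACE=/var/jenkins/workspace/nvme-phy-autotest/spdk
pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
[ -n "$pids" ] && sudo kill -9 $pids || true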
00:00:11.318 [WS-CLEANUP] done 00:00:11.322 [Pipeline] setCustomBuildProperty 00:00:11.335 [Pipeline] sh 00:00:11.619 + sudo git config --global --replace-all safe.directory '*' 00:00:11.688 [Pipeline] httpRequest 00:00:12.360 [Pipeline] echo 00:00:12.361 Sorcerer 10.211.164.20 is alive 00:00:12.369 [Pipeline] retry 00:00:12.371 [Pipeline] { 00:00:12.385 [Pipeline] httpRequest 00:00:12.389 HttpMethod: GET 00:00:12.389 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.389 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.395 Response Code: HTTP/1.1 200 OK 00:00:12.395 Success: Status code 200 is in the accepted range: 200,404 00:00:12.396 Saving response body to /var/jenkins/workspace/nvme-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:35.033 [Pipeline] } 00:00:35.045 [Pipeline] // retry 00:00:35.051 [Pipeline] sh 00:00:35.331 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:35.346 [Pipeline] httpRequest 00:00:35.650 [Pipeline] echo 00:00:35.652 Sorcerer 10.211.164.20 is alive 00:00:35.661 [Pipeline] retry 00:00:35.663 [Pipeline] { 00:00:35.676 [Pipeline] httpRequest 00:00:35.681 HttpMethod: GET 00:00:35.681 URL: http://10.211.164.20/packages/spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz 00:00:35.682 Sending request to url: http://10.211.164.20/packages/spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz 00:00:35.688 Response Code: HTTP/1.1 200 OK 00:00:35.689 Success: Status code 200 is in the accepted range: 200,404 00:00:35.689 Saving response body to /var/jenkins/workspace/nvme-phy-autotest/spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz 00:06:42.593 [Pipeline] } 00:06:42.611 [Pipeline] // retry 00:06:42.619 [Pipeline] sh 00:06:42.911 + tar --no-same-owner -xf spdk_62083ef48221875f88ff616a9e98818f7374ebf3.tar.gz 00:06:45.486 [Pipeline] sh 00:06:45.772 + git -C spdk log --oneline -n5 00:06:45.772 62083ef48 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata) 00:06:45.772 289f56464 lib/reduce: Support storing metadata on backing dev. 
(1 of 5, struct define and init process) 00:06:45.772 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:06:45.772 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:06:45.772 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller 00:06:45.782 [Pipeline] } 00:06:45.795 [Pipeline] // stage 00:06:45.804 [Pipeline] stage 00:06:45.807 [Pipeline] { (Prepare) 00:06:45.823 [Pipeline] writeFile 00:06:45.838 [Pipeline] sh 00:06:46.122 + logger -p user.info -t JENKINS-CI 00:06:46.135 [Pipeline] sh 00:06:46.419 + logger -p user.info -t JENKINS-CI 00:06:46.434 [Pipeline] sh 00:06:46.723 + cat autorun-spdk.conf 00:06:46.723 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:46.723 SPDK_TEST_IOAT=1 00:06:46.723 SPDK_TEST_NVME=1 00:06:46.723 SPDK_TEST_NVME_CLI=1 00:06:46.723 SPDK_TEST_OCF=1 00:06:46.723 SPDK_RUN_UBSAN=1 00:06:46.723 SPDK_TEST_NVME_CUSE=1 00:06:46.723 SPDK_TEST_SCHEDULER=1 00:06:46.723 SPDK_TEST_ACCEL=1 00:06:46.723 SPDK_TEST_NVME_INTERRUPT=1 00:06:46.730 RUN_NIGHTLY=0 00:06:46.734 [Pipeline] readFile 00:06:46.758 [Pipeline] withEnv 00:06:46.760 [Pipeline] { 00:06:46.772 [Pipeline] sh 00:06:47.064 + set -ex 00:06:47.064 + [[ -f /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf ]] 00:06:47.064 + source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:06:47.064 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:47.064 ++ SPDK_TEST_IOAT=1 00:06:47.064 ++ SPDK_TEST_NVME=1 00:06:47.064 ++ SPDK_TEST_NVME_CLI=1 00:06:47.064 ++ SPDK_TEST_OCF=1 00:06:47.064 ++ SPDK_RUN_UBSAN=1 00:06:47.064 ++ SPDK_TEST_NVME_CUSE=1 00:06:47.064 ++ SPDK_TEST_SCHEDULER=1 00:06:47.064 ++ SPDK_TEST_ACCEL=1 00:06:47.064 ++ SPDK_TEST_NVME_INTERRUPT=1 00:06:47.064 ++ RUN_NIGHTLY=0 00:06:47.064 + case $SPDK_TEST_NVMF_NICS in 00:06:47.064 + DRIVERS= 00:06:47.064 + [[ -n '' ]] 00:06:47.064 + exit 0 00:06:47.077 [Pipeline] } 00:06:47.099 [Pipeline] // withEnv 00:06:47.104 [Pipeline] } 00:06:47.118 [Pipeline] // stage 00:06:47.127 [Pipeline] catchError 00:06:47.129 [Pipeline] { 00:06:47.170 [Pipeline] timeout 00:06:47.170 Timeout set to expire in 40 min 00:06:47.172 [Pipeline] { 00:06:47.193 [Pipeline] stage 00:06:47.195 [Pipeline] { (Tests) 00:06:47.208 [Pipeline] sh 00:06:47.490 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvme-phy-autotest 00:06:47.491 ++ readlink -f /var/jenkins/workspace/nvme-phy-autotest 00:06:47.491 + DIR_ROOT=/var/jenkins/workspace/nvme-phy-autotest 00:06:47.491 + [[ -n /var/jenkins/workspace/nvme-phy-autotest ]] 00:06:47.491 + DIR_SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:06:47.491 + DIR_OUTPUT=/var/jenkins/workspace/nvme-phy-autotest/output 00:06:47.491 + [[ -d /var/jenkins/workspace/nvme-phy-autotest/spdk ]] 00:06:47.491 + [[ ! 
-d /var/jenkins/workspace/nvme-phy-autotest/output ]] 00:06:47.491 + mkdir -p /var/jenkins/workspace/nvme-phy-autotest/output 00:06:47.491 + [[ -d /var/jenkins/workspace/nvme-phy-autotest/output ]] 00:06:47.491 + [[ nvme-phy-autotest == pkgdep-* ]] 00:06:47.491 + cd /var/jenkins/workspace/nvme-phy-autotest 00:06:47.491 + source /etc/os-release 00:06:47.491 ++ NAME='Fedora Linux' 00:06:47.491 ++ VERSION='39 (Cloud Edition)' 00:06:47.491 ++ ID=fedora 00:06:47.491 ++ VERSION_ID=39 00:06:47.491 ++ VERSION_CODENAME= 00:06:47.491 ++ PLATFORM_ID=platform:f39 00:06:47.491 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:47.491 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:47.491 ++ LOGO=fedora-logo-icon 00:06:47.491 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:47.491 ++ HOME_URL=https://fedoraproject.org/ 00:06:47.491 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:47.491 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:47.491 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:47.491 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:47.491 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:47.491 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:47.491 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:47.491 ++ SUPPORT_END=2024-11-12 00:06:47.491 ++ VARIANT='Cloud Edition' 00:06:47.491 ++ VARIANT_ID=cloud 00:06:47.491 + uname -a 00:06:47.491 Linux spdk-wfp-46 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:06:47.491 + sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status 00:06:50.025 Hugepages 00:06:50.025 node hugesize free / total 00:06:50.025 node0 1048576kB 0 / 0 00:06:50.025 node0 2048kB 0 / 0 00:06:50.025 node1 1048576kB 0 / 0 00:06:50.025 node1 2048kB 0 / 0 00:06:50.025 00:06:50.025 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:50.025 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:50.025 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:50.025 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:50.025 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:50.025 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:50.025 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:50.025 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:50.284 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:50.284 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:50.284 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:50.284 + rm -f /tmp/spdk-ld-path 00:06:50.284 + source autorun-spdk.conf 00:06:50.284 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:50.284 ++ SPDK_TEST_IOAT=1 00:06:50.284 ++ SPDK_TEST_NVME=1 00:06:50.284 ++ SPDK_TEST_NVME_CLI=1 00:06:50.284 ++ SPDK_TEST_OCF=1 00:06:50.284 ++ SPDK_RUN_UBSAN=1 00:06:50.284 ++ SPDK_TEST_NVME_CUSE=1 00:06:50.284 ++ SPDK_TEST_SCHEDULER=1 00:06:50.284 ++ SPDK_TEST_ACCEL=1 00:06:50.284 ++ SPDK_TEST_NVME_INTERRUPT=1 00:06:50.284 ++ RUN_NIGHTLY=0 00:06:50.284 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:50.284 + [[ -n '' ]] 00:06:50.284 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvme-phy-autotest/spdk 00:06:50.284 + for M in 
/var/spdk/build-*-manifest.txt 00:06:50.284 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:50.284 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:06:50.284 + for M in /var/spdk/build-*-manifest.txt 00:06:50.284 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:50.284 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:06:50.284 + for M in /var/spdk/build-*-manifest.txt 00:06:50.284 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:50.284 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:06:50.284 ++ uname 00:06:50.284 + [[ Linux == \L\i\n\u\x ]] 00:06:50.284 + sudo dmesg -T 00:06:50.284 + sudo dmesg --clear 00:06:50.284 + dmesg_pid=3738565 00:06:50.284 + [[ Fedora Linux == FreeBSD ]] 00:06:50.284 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:50.284 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:50.284 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:50.284 + [[ -x /usr/src/fio-static/fio ]] 00:06:50.284 + export FIO_BIN=/usr/src/fio-static/fio 00:06:50.284 + FIO_BIN=/usr/src/fio-static/fio 00:06:50.284 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\e\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:50.284 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:50.284 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:50.284 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:50.284 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:50.284 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:50.284 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:50.284 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:50.284 + spdk/autorun.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:06:50.284 + sudo dmesg -Tw 00:06:50.543 13:38:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:50.543 13:38:21 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_IOAT=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVME_CLI=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_OCF=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_TEST_NVME_CUSE=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@8 -- $ SPDK_TEST_SCHEDULER=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_ACCEL=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_TEST_NVME_INTERRUPT=1 00:06:50.543 13:38:21 -- nvme-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=0 00:06:50.543 13:38:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:50.543 13:38:21 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:06:50.543 13:38:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:50.543 13:38:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:06:50.543 13:38:21 -- scripts/common.sh@15 -- $ 
shopt -s extglob 00:06:50.543 13:38:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:50.543 13:38:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.543 13:38:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.543 13:38:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.543 13:38:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.543 13:38:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.543 13:38:21 -- paths/export.sh@5 -- $ export PATH 00:06:50.543 13:38:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.543 13:38:21 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output 00:06:50.543 13:38:21 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:50.543 13:38:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733402301.XXXXXX 00:06:50.543 13:38:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733402301.UUS0mL 00:06:50.543 13:38:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:50.543 13:38:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:50.543 13:38:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/' 00:06:50.543 13:38:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:50.543 13:38:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:50.543 13:38:21 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:50.543 13:38:21 -- 
common/autotest_common.sh@409 -- $ xtrace_disable 00:06:50.543 13:38:21 -- common/autotest_common.sh@10 -- $ set +x 00:06:50.543 13:38:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk' 00:06:50.543 13:38:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:50.543 13:38:21 -- pm/common@17 -- $ local monitor 00:06:50.543 13:38:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:50.543 13:38:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:50.543 13:38:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:50.543 13:38:21 -- pm/common@21 -- $ date +%s 00:06:50.543 13:38:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:50.543 13:38:21 -- pm/common@21 -- $ date +%s 00:06:50.543 13:38:21 -- pm/common@25 -- $ sleep 1 00:06:50.543 13:38:22 -- pm/common@21 -- $ date +%s 00:06:50.543 13:38:22 -- pm/common@21 -- $ date +%s 00:06:50.543 13:38:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402302 00:06:50.543 13:38:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402302 00:06:50.543 13:38:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402302 00:06:50.543 13:38:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733402302 00:06:50.543 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402302_collect-cpu-load.pm.log 00:06:50.543 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402302_collect-vmstat.pm.log 00:06:50.543 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402302_collect-cpu-temp.pm.log 00:06:50.819 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733402302_collect-bmc-pm.bmc.pm.log 00:06:51.757 13:38:23 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:51.757 13:38:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:51.757 13:38:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:51.757 13:38:23 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk 00:06:51.757 13:38:23 -- spdk/autobuild.sh@16 -- $ date -u 00:06:51.757 Thu Dec 5 12:38:23 PM UTC 2024 00:06:51.757 13:38:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:51.757 v25.01-pre-298-g62083ef48 00:06:51.757 13:38:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:51.757 13:38:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:51.757 13:38:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:51.757 13:38:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:51.757 13:38:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:51.757 13:38:23 -- 
common/autotest_common.sh@10 -- $ set +x 00:06:51.757 ************************************ 00:06:51.757 START TEST ubsan 00:06:51.757 ************************************ 00:06:51.757 13:38:23 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:51.757 using ubsan 00:06:51.757 00:06:51.757 real 0m0.001s 00:06:51.757 user 0m0.000s 00:06:51.757 sys 0m0.000s 00:06:51.758 13:38:23 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:51.758 13:38:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:51.758 ************************************ 00:06:51.758 END TEST ubsan 00:06:51.758 ************************************ 00:06:51.758 13:38:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:51.758 13:38:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:51.758 13:38:23 -- spdk/autobuild.sh@47 -- $ [[ 1 -eq 1 ]] 00:06:51.758 13:38:23 -- spdk/autobuild.sh@48 -- $ ocf_precompile 00:06:51.758 13:38:23 -- common/autobuild_common.sh@441 -- $ run_test autobuild_ocf_precompile _ocf_precompile 00:06:51.758 13:38:23 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:06:51.758 13:38:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:51.758 13:38:23 -- common/autotest_common.sh@10 -- $ set +x 00:06:51.758 ************************************ 00:06:51.758 START TEST autobuild_ocf_precompile 00:06:51.758 ************************************ 00:06:51.758 13:38:23 autobuild_ocf_precompile -- common/autotest_common.sh@1129 -- $ _ocf_precompile 00:06:51.758 13:38:23 autobuild_ocf_precompile -- common/autobuild_common.sh@21 -- $ echo --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk 00:06:51.758 13:38:23 autobuild_ocf_precompile -- common/autobuild_common.sh@21 -- $ sed s/--enable-coverage//g 00:06:51.758 13:38:23 autobuild_ocf_precompile -- common/autobuild_common.sh@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --with-ublk 00:06:51.758 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:06:51.758 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:06:52.327 Using 'verbs' RDMA provider 00:07:08.156 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:20.360 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:20.360 Creating mk/config.mk...done. 00:07:20.360 Creating mk/cc.flags.mk...done. 00:07:20.360 Type 'make' to build. 
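Before the OCF precompile configure run above, the job echoes the saved configure flags and strips --enable-coverage from them (the "sed s/--enable-coverage//g" line), presumably because coverage instrumentation is not wanted in the precompiled OCF archive. A rough sketch of that flag-stripping step, with the flag list abbreviated for readability (the real job passes the full list shown in the log):

# Drop --enable-coverage from the saved configure flags, then reconfigure.
config_params='--enable-debug --enable-werror --enable-ubsan --enable-coverage --with-ublk'
ocf_params=$(echo "$config_params" | sed 's/--enable-coverage//g')
/var/jenkins/workspace/nvme-phy-autotest/spdk/configure $ocf_params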
00:07:20.360 13:38:51 autobuild_ocf_precompile -- common/autobuild_common.sh@22 -- $ make -j72 include/spdk/config.h 00:07:20.360 13:38:51 autobuild_ocf_precompile -- common/autobuild_common.sh@23 -- $ CC=gcc 00:07:20.360 13:38:51 autobuild_ocf_precompile -- common/autobuild_common.sh@23 -- $ CCAR=ar 00:07:20.360 13:38:51 autobuild_ocf_precompile -- common/autobuild_common.sh@23 -- $ make -j72 -C /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf exportlib O=/var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:07:20.360 make: Entering directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf' 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_queue.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_ctx.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_metadata.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_core.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/promotion/nhit.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_composite_volume.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_debug.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_mngt.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/alru.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/acp.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_err.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io_class.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_stats.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_types.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cleaner.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cache.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_def.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_logger.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_volume.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io.h 00:07:20.620 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cfg.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume_priv.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io_allocator.h 
00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_stats.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_structs.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.c 00:07:20.881 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/ops.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger_priv.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_priv.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_builder.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue_priv.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_misc.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_cache.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_priv.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool_priv.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_io_class.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_flush.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_metadata.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core_priv.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.c 00:07:20.881 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_status.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_internal.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_bit.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment_id.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_common.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_structs.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.h 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.c 00:07:20.881 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.c 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cache_line.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.c 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.c 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.c 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru_structs.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_priv.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.c 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.c 
00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_ops.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop_structs.h 00:07:20.882 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp_structs.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache_priv.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_flush.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_io.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_io.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.h 00:07:21.142 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_debug.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_flush.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx_priv.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_priv.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_priv.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_class.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_def_priv.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_part.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.h 00:07:21.142 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.c 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru_structs.h 00:07:21.142 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume_priv.h 00:07:21.406 CC env_ocf/mpool.o 00:07:21.406 CC env_ocf/ocf_env.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_pipeline.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_alock.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_async_lock.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_cache_line.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_realloc.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_rbtree.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_generator.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_user_part.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_parallelize.o 00:07:21.406 CC env_ocf/src/ocf/utils/utils_list.o 00:07:21.407 CC env_ocf/src/ocf/utils/utils_cleaner.o 00:07:21.407 CC env_ocf/src/ocf/utils/utils_refcnt.o 00:07:21.407 CC env_ocf/src/ocf/utils/utils_io.o 00:07:21.407 CC env_ocf/src/ocf/utils/utils_request.o 00:07:21.407 CC env_ocf/src/ocf/promotion/nhit/nhit_hash.o 00:07:21.407 CC env_ocf/src/ocf/ocf_volume.o 00:07:21.407 CC env_ocf/src/ocf/promotion/nhit/nhit.o 00:07:21.407 CC env_ocf/src/ocf/mngt/ocf_mngt_cache.o 00:07:21.407 CC env_ocf/src/ocf/promotion/promotion.o 00:07:21.407 CC env_ocf/src/ocf/mngt/ocf_mngt_misc.o 00:07:21.407 CC env_ocf/src/ocf/mngt/ocf_mngt_core_pool.o 00:07:21.407 CC env_ocf/src/ocf/mngt/ocf_mngt_common.o 00:07:21.407 CC env_ocf/src/ocf/mngt/ocf_mngt_core.o 00:07:21.407 CC env_ocf/src/ocf/mngt/ocf_mngt_io_class.o 00:07:21.407 CC env_ocf/src/ocf/mngt/ocf_mngt_flush.o 00:07:21.407 CC env_ocf/src/ocf/ocf_logger.o 00:07:21.407 CC env_ocf/src/ocf/ocf_queue.o 00:07:21.407 CC env_ocf/src/ocf/ocf_stats_builder.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata.o 00:07:21.407 CC env_ocf/src/ocf/ocf_metadata.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_raw.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_collision.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_eviction_policy.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_raw_dynamic.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_segment.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_partition.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_misc.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_superblock.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_raw_atomic.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_raw_volatile.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_cleaning_policy.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_io.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_passive_update.o 00:07:21.407 CC env_ocf/src/ocf/metadata/metadata_core.o 
00:07:21.407 CC env_ocf/src/ocf/cleaning/nop.o 00:07:21.407 CC env_ocf/src/ocf/cleaning/alru.o 00:07:21.407 CC env_ocf/src/ocf/cleaning/acp.o 00:07:21.407 CC env_ocf/src/ocf/cleaning/cleaning.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_bf.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_fast.o 00:07:21.407 CC env_ocf/src/ocf/ocf_seq_cutoff.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_inv.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_flush.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_wo.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_discard.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_wi.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_io.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_zero.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_wt.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_common.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_wa.o 00:07:21.407 CC env_ocf/src/ocf/engine/cache_engine.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_d2c.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_rd.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_pt.o 00:07:21.407 CC env_ocf/src/ocf/engine/engine_wb.o 00:07:21.407 CC env_ocf/src/ocf/ocf_io.o 00:07:21.407 CC env_ocf/src/ocf/ocf_ctx.o 00:07:21.407 CC env_ocf/src/ocf/ocf_core.o 00:07:21.407 CC env_ocf/src/ocf/ocf_stats.o 00:07:21.665 CC env_ocf/src/ocf/ocf_lru.o 00:07:21.665 CC env_ocf/src/ocf/ocf_io_class.o 00:07:21.924 CC env_ocf/src/ocf/ocf_space.o 00:07:21.924 CC env_ocf/src/ocf/concurrency/ocf_mio_concurrency.o 00:07:21.924 CC env_ocf/src/ocf/concurrency/ocf_pio_concurrency.o 00:07:21.924 CC env_ocf/src/ocf/concurrency/ocf_concurrency.o 00:07:21.924 CC env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.o 00:07:21.924 CC env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.o 00:07:21.924 CC env_ocf/src/ocf/ocf_composite_volume.o 00:07:21.924 CC env_ocf/src/ocf/ocf_cache.o 00:07:21.924 CC env_ocf/src/ocf/ocf_request.o 00:07:22.861 LIB libspdk_ocfenv.a 00:07:23.156 cp /var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a /var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:07:23.156 make: Leaving directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf' 00:07:23.156 13:38:54 autobuild_ocf_precompile -- common/autobuild_common.sh@25 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a' 00:07:23.156 13:38:54 autobuild_ocf_precompile -- common/autobuild_common.sh@27 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:07:23.156 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:07:23.156 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:07:23.441 Using 'verbs' RDMA provider 00:07:36.604 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:46.596 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:46.855 Creating mk/config.mk...done. 00:07:46.855 Creating mk/cc.flags.mk...done. 00:07:46.855 Type 'make' to build. 
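Taken together, the precompile sequence above is: build the OCF sources under lib/env_ocf with the exportlib target (which archives the objects as libspdk_ocfenv.a and copies the result to the O= path, as the cp line shows), then reconfigure SPDK with --with-ocf pointing at that archive. A condensed sketch of those two steps under that reading, with the configure flag list abbreviated:

# Precompile OCF once into a single archive...
SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
make -j72 -C "$SPDK/lib/env_ocf" exportlib O="$SPDK/ocf.a"
# ...then point the main SPDK configure at the prebuilt archive.
"$SPDK/configure" --enable-debug --enable-werror --enable-ubsan --enable-coverage --with-ublk --with-ocf="$SPDK/ocf.a"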
00:07:46.855 00:07:46.855 real 0m55.057s 00:07:46.855 user 0m53.396s 00:07:46.855 sys 0m42.694s 00:07:46.855 13:39:18 autobuild_ocf_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:46.855 13:39:18 autobuild_ocf_precompile -- common/autotest_common.sh@10 -- $ set +x 00:07:46.855 ************************************ 00:07:46.855 END TEST autobuild_ocf_precompile 00:07:46.855 ************************************ 00:07:46.855 13:39:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:46.855 13:39:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:46.855 13:39:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:46.855 13:39:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:46.855 13:39:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:46.855 13:39:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a --with-shared 00:07:46.855 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:07:46.855 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:07:47.423 Using 'verbs' RDMA provider 00:08:00.574 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal.log)...done. 00:08:12.782 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:08:12.783 Creating mk/config.mk...done. 00:08:12.783 Creating mk/cc.flags.mk...done. 00:08:12.783 Type 'make' to build. 00:08:12.783 13:39:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:08:12.783 13:39:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:08:12.783 13:39:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:08:12.783 13:39:43 -- common/autotest_common.sh@10 -- $ set +x 00:08:12.783 ************************************ 00:08:12.783 START TEST make 00:08:12.783 ************************************ 00:08:12.783 13:39:43 make -- common/autotest_common.sh@1129 -- $ make -j72 00:08:12.783 make[1]: Nothing to be done for 'all'. 
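Each test step in this log is bracketed by "START TEST"/"END TEST" banners and a real/user/sys timing summary produced by the run_test wrapper invoked from SPDK's common scripts. The sketch below is only a rough approximation of what such a wrapper does (banners plus timing around the wrapped command), not SPDK's actual implementation:

# Rough approximation of a run_test-style wrapper (illustrative only).
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# Example: time the top-level build the same way the log does.
run_test_sketch make make -j72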
00:08:22.781 The Meson build system 00:08:22.781 Version: 1.5.0 00:08:22.781 Source dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk 00:08:22.781 Build dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp 00:08:22.781 Build type: native build 00:08:22.781 Program cat found: YES (/usr/bin/cat) 00:08:22.781 Project name: DPDK 00:08:22.781 Project version: 24.03.0 00:08:22.781 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:08:22.781 C linker for the host machine: cc ld.bfd 2.40-14 00:08:22.781 Host machine cpu family: x86_64 00:08:22.781 Host machine cpu: x86_64 00:08:22.781 Message: ## Building in Developer Mode ## 00:08:22.781 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:22.781 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:08:22.781 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:22.781 Program python3 found: YES (/usr/bin/python3) 00:08:22.781 Program cat found: YES (/usr/bin/cat) 00:08:22.781 Compiler for C supports arguments -march=native: YES 00:08:22.781 Checking for size of "void *" : 8 00:08:22.781 Checking for size of "void *" : 8 (cached) 00:08:22.781 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:08:22.781 Library m found: YES 00:08:22.781 Library numa found: YES 00:08:22.781 Has header "numaif.h" : YES 00:08:22.781 Library fdt found: NO 00:08:22.781 Library execinfo found: NO 00:08:22.781 Has header "execinfo.h" : YES 00:08:22.781 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:08:22.781 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:22.781 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:22.781 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:22.781 Run-time dependency openssl found: YES 3.1.1 00:08:22.781 Run-time dependency libpcap found: YES 1.10.4 00:08:22.781 Has header "pcap.h" with dependency libpcap: YES 00:08:22.781 Compiler for C supports arguments -Wcast-qual: YES 00:08:22.781 Compiler for C supports arguments -Wdeprecated: YES 00:08:22.781 Compiler for C supports arguments -Wformat: YES 00:08:22.781 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:22.781 Compiler for C supports arguments -Wformat-security: NO 00:08:22.781 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:22.781 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:22.781 Compiler for C supports arguments -Wnested-externs: YES 00:08:22.781 Compiler for C supports arguments -Wold-style-definition: YES 00:08:22.781 Compiler for C supports arguments -Wpointer-arith: YES 00:08:22.781 Compiler for C supports arguments -Wsign-compare: YES 00:08:22.781 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:22.781 Compiler for C supports arguments -Wundef: YES 00:08:22.781 Compiler for C supports arguments -Wwrite-strings: YES 00:08:22.781 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:22.781 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:22.781 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:22.781 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:22.781 Program objdump found: YES (/usr/bin/objdump) 00:08:22.781 Compiler for C supports arguments -mavx512f: YES 00:08:22.781 Checking if "AVX512 checking" compiles: YES 00:08:22.781 Fetching 
value of define "__SSE4_2__" : 1 00:08:22.781 Fetching value of define "__AES__" : 1 00:08:22.781 Fetching value of define "__AVX__" : 1 00:08:22.781 Fetching value of define "__AVX2__" : 1 00:08:22.781 Fetching value of define "__AVX512BW__" : 1 00:08:22.781 Fetching value of define "__AVX512CD__" : 1 00:08:22.781 Fetching value of define "__AVX512DQ__" : 1 00:08:22.781 Fetching value of define "__AVX512F__" : 1 00:08:22.781 Fetching value of define "__AVX512VL__" : 1 00:08:22.781 Fetching value of define "__PCLMUL__" : 1 00:08:22.781 Fetching value of define "__RDRND__" : 1 00:08:22.781 Fetching value of define "__RDSEED__" : 1 00:08:22.781 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:22.781 Fetching value of define "__znver1__" : (undefined) 00:08:22.781 Fetching value of define "__znver2__" : (undefined) 00:08:22.781 Fetching value of define "__znver3__" : (undefined) 00:08:22.781 Fetching value of define "__znver4__" : (undefined) 00:08:22.781 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:22.781 Message: lib/log: Defining dependency "log" 00:08:22.781 Message: lib/kvargs: Defining dependency "kvargs" 00:08:22.781 Message: lib/telemetry: Defining dependency "telemetry" 00:08:22.781 Checking for function "getentropy" : NO 00:08:22.781 Message: lib/eal: Defining dependency "eal" 00:08:22.781 Message: lib/ring: Defining dependency "ring" 00:08:22.781 Message: lib/rcu: Defining dependency "rcu" 00:08:22.781 Message: lib/mempool: Defining dependency "mempool" 00:08:22.781 Message: lib/mbuf: Defining dependency "mbuf" 00:08:22.781 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:22.781 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:22.781 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:22.781 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:22.781 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:22.781 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:08:22.781 Compiler for C supports arguments -mpclmul: YES 00:08:22.781 Compiler for C supports arguments -maes: YES 00:08:22.781 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:22.781 Compiler for C supports arguments -mavx512bw: YES 00:08:22.781 Compiler for C supports arguments -mavx512dq: YES 00:08:22.781 Compiler for C supports arguments -mavx512vl: YES 00:08:22.781 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:22.781 Compiler for C supports arguments -mavx2: YES 00:08:22.781 Compiler for C supports arguments -mavx: YES 00:08:22.781 Message: lib/net: Defining dependency "net" 00:08:22.781 Message: lib/meter: Defining dependency "meter" 00:08:22.781 Message: lib/ethdev: Defining dependency "ethdev" 00:08:22.781 Message: lib/pci: Defining dependency "pci" 00:08:22.781 Message: lib/cmdline: Defining dependency "cmdline" 00:08:22.781 Message: lib/hash: Defining dependency "hash" 00:08:22.781 Message: lib/timer: Defining dependency "timer" 00:08:22.781 Message: lib/compressdev: Defining dependency "compressdev" 00:08:22.781 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:22.781 Message: lib/dmadev: Defining dependency "dmadev" 00:08:22.781 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:22.781 Message: lib/power: Defining dependency "power" 00:08:22.781 Message: lib/reorder: Defining dependency "reorder" 00:08:22.781 Message: lib/security: Defining dependency "security" 00:08:22.781 Has header "linux/userfaultfd.h" : YES 00:08:22.781 Has header "linux/vduse.h" : YES 00:08:22.781 Message: 
lib/vhost: Defining dependency "vhost" 00:08:22.781 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:22.781 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:22.781 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:22.781 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:22.781 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:22.781 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:22.781 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:22.781 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:22.781 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:22.781 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:22.781 Program doxygen found: YES (/usr/local/bin/doxygen) 00:08:22.781 Configuring doxy-api-html.conf using configuration 00:08:22.781 Configuring doxy-api-man.conf using configuration 00:08:22.781 Program mandb found: YES (/usr/bin/mandb) 00:08:22.781 Program sphinx-build found: NO 00:08:22.781 Configuring rte_build_config.h using configuration 00:08:22.781 Message: 00:08:22.781 ================= 00:08:22.781 Applications Enabled 00:08:22.782 ================= 00:08:22.782 00:08:22.782 apps: 00:08:22.782 00:08:22.782 00:08:22.782 Message: 00:08:22.782 ================= 00:08:22.782 Libraries Enabled 00:08:22.782 ================= 00:08:22.782 00:08:22.782 libs: 00:08:22.782 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:22.782 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:22.782 cryptodev, dmadev, power, reorder, security, vhost, 00:08:22.782 00:08:22.782 Message: 00:08:22.782 =============== 00:08:22.782 Drivers Enabled 00:08:22.782 =============== 00:08:22.782 00:08:22.782 common: 00:08:22.782 00:08:22.782 bus: 00:08:22.782 pci, vdev, 00:08:22.782 mempool: 00:08:22.782 ring, 00:08:22.782 dma: 00:08:22.782 00:08:22.782 net: 00:08:22.782 00:08:22.782 crypto: 00:08:22.782 00:08:22.782 compress: 00:08:22.782 00:08:22.782 vdpa: 00:08:22.782 00:08:22.782 00:08:22.782 Message: 00:08:22.782 ================= 00:08:22.782 Content Skipped 00:08:22.782 ================= 00:08:22.782 00:08:22.782 apps: 00:08:22.782 dumpcap: explicitly disabled via build config 00:08:22.782 graph: explicitly disabled via build config 00:08:22.782 pdump: explicitly disabled via build config 00:08:22.782 proc-info: explicitly disabled via build config 00:08:22.782 test-acl: explicitly disabled via build config 00:08:22.782 test-bbdev: explicitly disabled via build config 00:08:22.782 test-cmdline: explicitly disabled via build config 00:08:22.782 test-compress-perf: explicitly disabled via build config 00:08:22.782 test-crypto-perf: explicitly disabled via build config 00:08:22.782 test-dma-perf: explicitly disabled via build config 00:08:22.782 test-eventdev: explicitly disabled via build config 00:08:22.782 test-fib: explicitly disabled via build config 00:08:22.782 test-flow-perf: explicitly disabled via build config 00:08:22.782 test-gpudev: explicitly disabled via build config 00:08:22.782 test-mldev: explicitly disabled via build config 00:08:22.782 test-pipeline: explicitly disabled via build config 00:08:22.782 test-pmd: explicitly disabled via build config 00:08:22.782 test-regex: explicitly disabled via build config 00:08:22.782 test-sad: explicitly disabled via build config 00:08:22.782 test-security-perf: explicitly disabled 
via build config 00:08:22.782 00:08:22.782 libs: 00:08:22.782 argparse: explicitly disabled via build config 00:08:22.782 metrics: explicitly disabled via build config 00:08:22.782 acl: explicitly disabled via build config 00:08:22.782 bbdev: explicitly disabled via build config 00:08:22.782 bitratestats: explicitly disabled via build config 00:08:22.782 bpf: explicitly disabled via build config 00:08:22.782 cfgfile: explicitly disabled via build config 00:08:22.782 distributor: explicitly disabled via build config 00:08:22.782 efd: explicitly disabled via build config 00:08:22.782 eventdev: explicitly disabled via build config 00:08:22.782 dispatcher: explicitly disabled via build config 00:08:22.782 gpudev: explicitly disabled via build config 00:08:22.782 gro: explicitly disabled via build config 00:08:22.782 gso: explicitly disabled via build config 00:08:22.782 ip_frag: explicitly disabled via build config 00:08:22.782 jobstats: explicitly disabled via build config 00:08:22.782 latencystats: explicitly disabled via build config 00:08:22.782 lpm: explicitly disabled via build config 00:08:22.782 member: explicitly disabled via build config 00:08:22.782 pcapng: explicitly disabled via build config 00:08:22.782 rawdev: explicitly disabled via build config 00:08:22.782 regexdev: explicitly disabled via build config 00:08:22.782 mldev: explicitly disabled via build config 00:08:22.782 rib: explicitly disabled via build config 00:08:22.782 sched: explicitly disabled via build config 00:08:22.782 stack: explicitly disabled via build config 00:08:22.782 ipsec: explicitly disabled via build config 00:08:22.782 pdcp: explicitly disabled via build config 00:08:22.782 fib: explicitly disabled via build config 00:08:22.782 port: explicitly disabled via build config 00:08:22.782 pdump: explicitly disabled via build config 00:08:22.782 table: explicitly disabled via build config 00:08:22.782 pipeline: explicitly disabled via build config 00:08:22.782 graph: explicitly disabled via build config 00:08:22.782 node: explicitly disabled via build config 00:08:22.782 00:08:22.782 drivers: 00:08:22.782 common/cpt: not in enabled drivers build config 00:08:22.782 common/dpaax: not in enabled drivers build config 00:08:22.782 common/iavf: not in enabled drivers build config 00:08:22.782 common/idpf: not in enabled drivers build config 00:08:22.782 common/ionic: not in enabled drivers build config 00:08:22.782 common/mvep: not in enabled drivers build config 00:08:22.782 common/octeontx: not in enabled drivers build config 00:08:22.782 bus/auxiliary: not in enabled drivers build config 00:08:22.782 bus/cdx: not in enabled drivers build config 00:08:22.782 bus/dpaa: not in enabled drivers build config 00:08:22.782 bus/fslmc: not in enabled drivers build config 00:08:22.782 bus/ifpga: not in enabled drivers build config 00:08:22.782 bus/platform: not in enabled drivers build config 00:08:22.782 bus/uacce: not in enabled drivers build config 00:08:22.782 bus/vmbus: not in enabled drivers build config 00:08:22.782 common/cnxk: not in enabled drivers build config 00:08:22.782 common/mlx5: not in enabled drivers build config 00:08:22.782 common/nfp: not in enabled drivers build config 00:08:22.782 common/nitrox: not in enabled drivers build config 00:08:22.782 common/qat: not in enabled drivers build config 00:08:22.782 common/sfc_efx: not in enabled drivers build config 00:08:22.782 mempool/bucket: not in enabled drivers build config 00:08:22.782 mempool/cnxk: not in enabled drivers build config 00:08:22.782 
mempool/dpaa: not in enabled drivers build config 00:08:22.782 mempool/dpaa2: not in enabled drivers build config 00:08:22.782 mempool/octeontx: not in enabled drivers build config 00:08:22.782 mempool/stack: not in enabled drivers build config 00:08:22.782 dma/cnxk: not in enabled drivers build config 00:08:22.782 dma/dpaa: not in enabled drivers build config 00:08:22.782 dma/dpaa2: not in enabled drivers build config 00:08:22.782 dma/hisilicon: not in enabled drivers build config 00:08:22.782 dma/idxd: not in enabled drivers build config 00:08:22.782 dma/ioat: not in enabled drivers build config 00:08:22.782 dma/skeleton: not in enabled drivers build config 00:08:22.782 net/af_packet: not in enabled drivers build config 00:08:22.782 net/af_xdp: not in enabled drivers build config 00:08:22.782 net/ark: not in enabled drivers build config 00:08:22.782 net/atlantic: not in enabled drivers build config 00:08:22.782 net/avp: not in enabled drivers build config 00:08:22.782 net/axgbe: not in enabled drivers build config 00:08:22.782 net/bnx2x: not in enabled drivers build config 00:08:22.782 net/bnxt: not in enabled drivers build config 00:08:22.782 net/bonding: not in enabled drivers build config 00:08:22.782 net/cnxk: not in enabled drivers build config 00:08:22.782 net/cpfl: not in enabled drivers build config 00:08:22.782 net/cxgbe: not in enabled drivers build config 00:08:22.782 net/dpaa: not in enabled drivers build config 00:08:22.782 net/dpaa2: not in enabled drivers build config 00:08:22.782 net/e1000: not in enabled drivers build config 00:08:22.782 net/ena: not in enabled drivers build config 00:08:22.782 net/enetc: not in enabled drivers build config 00:08:22.782 net/enetfec: not in enabled drivers build config 00:08:22.782 net/enic: not in enabled drivers build config 00:08:22.782 net/failsafe: not in enabled drivers build config 00:08:22.782 net/fm10k: not in enabled drivers build config 00:08:22.782 net/gve: not in enabled drivers build config 00:08:22.782 net/hinic: not in enabled drivers build config 00:08:22.782 net/hns3: not in enabled drivers build config 00:08:22.782 net/i40e: not in enabled drivers build config 00:08:22.782 net/iavf: not in enabled drivers build config 00:08:22.782 net/ice: not in enabled drivers build config 00:08:22.782 net/idpf: not in enabled drivers build config 00:08:22.782 net/igc: not in enabled drivers build config 00:08:22.782 net/ionic: not in enabled drivers build config 00:08:22.782 net/ipn3ke: not in enabled drivers build config 00:08:22.782 net/ixgbe: not in enabled drivers build config 00:08:22.782 net/mana: not in enabled drivers build config 00:08:22.782 net/memif: not in enabled drivers build config 00:08:22.782 net/mlx4: not in enabled drivers build config 00:08:22.782 net/mlx5: not in enabled drivers build config 00:08:22.782 net/mvneta: not in enabled drivers build config 00:08:22.782 net/mvpp2: not in enabled drivers build config 00:08:22.782 net/netvsc: not in enabled drivers build config 00:08:22.782 net/nfb: not in enabled drivers build config 00:08:22.782 net/nfp: not in enabled drivers build config 00:08:22.782 net/ngbe: not in enabled drivers build config 00:08:22.782 net/null: not in enabled drivers build config 00:08:22.782 net/octeontx: not in enabled drivers build config 00:08:22.782 net/octeon_ep: not in enabled drivers build config 00:08:22.782 net/pcap: not in enabled drivers build config 00:08:22.782 net/pfe: not in enabled drivers build config 00:08:22.782 net/qede: not in enabled drivers build config 00:08:22.782 
net/ring: not in enabled drivers build config 00:08:22.782 net/sfc: not in enabled drivers build config 00:08:22.782 net/softnic: not in enabled drivers build config 00:08:22.782 net/tap: not in enabled drivers build config 00:08:22.782 net/thunderx: not in enabled drivers build config 00:08:22.782 net/txgbe: not in enabled drivers build config 00:08:22.782 net/vdev_netvsc: not in enabled drivers build config 00:08:22.782 net/vhost: not in enabled drivers build config 00:08:22.782 net/virtio: not in enabled drivers build config 00:08:22.782 net/vmxnet3: not in enabled drivers build config 00:08:22.782 raw/*: missing internal dependency, "rawdev" 00:08:22.782 crypto/armv8: not in enabled drivers build config 00:08:22.782 crypto/bcmfs: not in enabled drivers build config 00:08:22.782 crypto/caam_jr: not in enabled drivers build config 00:08:22.782 crypto/ccp: not in enabled drivers build config 00:08:22.782 crypto/cnxk: not in enabled drivers build config 00:08:22.782 crypto/dpaa_sec: not in enabled drivers build config 00:08:22.782 crypto/dpaa2_sec: not in enabled drivers build config 00:08:22.782 crypto/ipsec_mb: not in enabled drivers build config 00:08:22.782 crypto/mlx5: not in enabled drivers build config 00:08:22.782 crypto/mvsam: not in enabled drivers build config 00:08:22.782 crypto/nitrox: not in enabled drivers build config 00:08:22.783 crypto/null: not in enabled drivers build config 00:08:22.783 crypto/octeontx: not in enabled drivers build config 00:08:22.783 crypto/openssl: not in enabled drivers build config 00:08:22.783 crypto/scheduler: not in enabled drivers build config 00:08:22.783 crypto/uadk: not in enabled drivers build config 00:08:22.783 crypto/virtio: not in enabled drivers build config 00:08:22.783 compress/isal: not in enabled drivers build config 00:08:22.783 compress/mlx5: not in enabled drivers build config 00:08:22.783 compress/nitrox: not in enabled drivers build config 00:08:22.783 compress/octeontx: not in enabled drivers build config 00:08:22.783 compress/zlib: not in enabled drivers build config 00:08:22.783 regex/*: missing internal dependency, "regexdev" 00:08:22.783 ml/*: missing internal dependency, "mldev" 00:08:22.783 vdpa/ifc: not in enabled drivers build config 00:08:22.783 vdpa/mlx5: not in enabled drivers build config 00:08:22.783 vdpa/nfp: not in enabled drivers build config 00:08:22.783 vdpa/sfc: not in enabled drivers build config 00:08:22.783 event/*: missing internal dependency, "eventdev" 00:08:22.783 baseband/*: missing internal dependency, "bbdev" 00:08:22.783 gpu/*: missing internal dependency, "gpudev" 00:08:22.783 00:08:22.783 00:08:22.783 Build targets in project: 85 00:08:22.783 00:08:22.783 DPDK 24.03.0 00:08:22.783 00:08:22.783 User defined options 00:08:22.783 buildtype : debug 00:08:22.783 default_library : shared 00:08:22.783 libdir : lib 00:08:22.783 prefix : /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:08:22.783 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:08:22.783 c_link_args : 00:08:22.783 cpu_instruction_set: native 00:08:22.783 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:08:22.783 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:08:22.783 enable_docs : false 00:08:22.783 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:08:22.783 enable_kmods : false 00:08:22.783 max_lcores : 128 00:08:22.783 tests : false 00:08:22.783 00:08:22.783 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:22.783 ninja: Entering directory `/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp' 00:08:22.783 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:22.783 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:22.783 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:22.783 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:22.783 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:22.783 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:22.783 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:22.783 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:22.783 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:22.783 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:22.783 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:22.783 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:22.783 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:22.783 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:22.783 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:22.783 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:22.783 [17/268] Linking static target lib/librte_kvargs.a 00:08:22.783 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:22.783 [19/268] Linking static target lib/librte_log.a 00:08:23.045 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:23.045 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:23.045 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:23.045 [23/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:23.045 [24/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:23.045 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:23.045 [26/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:23.045 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:23.045 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:23.045 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:23.045 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:23.045 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:23.045 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:23.045 [33/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:23.045 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:23.045 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:23.045 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:23.045 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:23.045 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:23.045 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:23.045 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:23.045 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:23.045 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:23.045 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:23.045 [44/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:23.045 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:23.045 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:23.045 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:23.045 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:23.045 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:23.045 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:23.045 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:23.045 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:23.045 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:23.045 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:23.045 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:23.045 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:23.045 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:23.045 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:23.308 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:23.308 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:23.308 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:23.308 [62/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:23.308 [63/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:23.308 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:23.308 [65/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:23.308 [66/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:23.308 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:23.308 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:23.308 [69/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:23.308 [70/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:23.308 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:23.308 [72/268] Linking static target lib/librte_pci.a 00:08:23.308 [73/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:23.308 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:23.308 [75/268] Linking static target lib/librte_rcu.a 00:08:23.308 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:23.308 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:23.308 [78/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.308 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:23.308 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:23.308 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:23.308 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:23.308 [83/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:23.308 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:23.308 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:23.308 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:23.308 [87/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:23.308 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:23.308 [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:23.308 [90/268] Linking static target lib/librte_telemetry.a 00:08:23.308 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:23.308 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:23.308 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:23.308 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:23.308 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:23.308 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:23.308 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:23.308 [98/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:23.308 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:23.308 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:23.308 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:23.308 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:23.308 [103/268] Linking static target lib/librte_ring.a 00:08:23.308 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:23.308 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:23.308 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:23.566 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:23.566 [108/268] Linking static target lib/librte_mempool.a 00:08:23.566 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:23.566 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:23.566 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:23.566 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:23.566 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:23.566 [114/268] Linking 
static target lib/librte_meter.a 00:08:23.566 [115/268] Linking static target lib/librte_eal.a 00:08:23.566 [116/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:23.566 [117/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:23.566 [118/268] Linking static target lib/librte_net.a 00:08:23.566 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:23.567 [120/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.567 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:23.567 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:23.823 [123/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.823 [124/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:23.823 [125/268] Linking static target lib/librte_timer.a 00:08:23.823 [126/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:23.823 [127/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.823 [128/268] Linking target lib/librte_log.so.24.1 00:08:23.823 [129/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:23.824 [130/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:23.824 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:23.824 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:23.824 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:23.824 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:23.824 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:23.824 [136/268] Linking static target lib/librte_cmdline.a 00:08:23.824 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:23.824 [138/268] Linking static target lib/librte_mbuf.a 00:08:23.824 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:23.824 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:23.824 [141/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:23.824 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:23.824 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:23.824 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.824 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:23.824 [146/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.824 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:23.824 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:23.824 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:23.824 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:23.824 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:23.824 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:23.824 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:23.824 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
00:08:23.824 [155/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:23.824 [156/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:23.824 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:24.081 [158/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:24.081 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:24.081 [160/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.081 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.081 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:24.081 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:24.081 [164/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:08:24.081 [165/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:24.081 [166/268] Linking static target lib/librte_dmadev.a 00:08:24.081 [167/268] Linking static target lib/librte_reorder.a 00:08:24.081 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:24.081 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:24.081 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:24.081 [171/268] Linking static target lib/librte_compressdev.a 00:08:24.081 [172/268] Linking target lib/librte_kvargs.so.24.1 00:08:24.081 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:24.081 [174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:24.081 [175/268] Linking target lib/librte_telemetry.so.24.1 00:08:24.081 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:24.081 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:24.081 [178/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:24.081 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:24.081 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:24.081 [181/268] Linking static target lib/librte_power.a 00:08:24.081 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:24.081 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:24.081 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:24.081 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:24.081 [186/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:24.081 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:24.081 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:24.081 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:08:24.081 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:24.081 [191/268] Linking static target lib/librte_security.a 00:08:24.081 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:08:24.081 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:24.081 [194/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.339 
[195/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:24.339 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:24.339 [197/268] Linking static target lib/librte_hash.a 00:08:24.339 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:24.339 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:24.339 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:24.339 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:24.339 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:24.339 [203/268] Linking static target drivers/librte_bus_vdev.a 00:08:24.339 [204/268] Linking static target drivers/librte_mempool_ring.a 00:08:24.339 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:24.339 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:24.339 [207/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.339 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:24.339 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:24.339 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:24.339 [211/268] Linking static target drivers/librte_bus_pci.a 00:08:24.339 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.598 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:24.598 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:24.598 [215/268] Linking static target lib/librte_cryptodev.a 00:08:24.598 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.598 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.598 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.856 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.856 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.114 [221/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.114 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:25.114 [223/268] Linking static target lib/librte_ethdev.a 00:08:25.114 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:25.114 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.114 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.372 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.745 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:26.745 [229/268] Linking static target lib/librte_vhost.a 00:08:26.745 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.644 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped 
by meson to capture output) 00:08:35.200 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.458 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.716 [234/268] Linking target lib/librte_eal.so.24.1 00:08:35.716 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:35.716 [236/268] Linking target lib/librte_ring.so.24.1 00:08:35.716 [237/268] Linking target lib/librte_meter.so.24.1 00:08:35.716 [238/268] Linking target lib/librte_pci.so.24.1 00:08:35.716 [239/268] Linking target lib/librte_timer.so.24.1 00:08:35.716 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:35.716 [241/268] Linking target lib/librte_dmadev.so.24.1 00:08:35.974 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:35.974 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:35.974 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:35.974 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:35.974 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:35.974 [247/268] Linking target lib/librte_mempool.so.24.1 00:08:35.974 [248/268] Linking target lib/librte_rcu.so.24.1 00:08:35.974 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:36.232 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:36.232 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:36.232 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:36.232 [253/268] Linking target lib/librte_mbuf.so.24.1 00:08:36.232 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:36.489 [255/268] Linking target lib/librte_compressdev.so.24.1 00:08:36.489 [256/268] Linking target lib/librte_reorder.so.24.1 00:08:36.489 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:08:36.489 [258/268] Linking target lib/librte_net.so.24.1 00:08:36.489 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:36.489 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:36.746 [261/268] Linking target lib/librte_hash.so.24.1 00:08:36.747 [262/268] Linking target lib/librte_cmdline.so.24.1 00:08:36.747 [263/268] Linking target lib/librte_security.so.24.1 00:08:36.747 [264/268] Linking target lib/librte_ethdev.so.24.1 00:08:36.747 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:36.747 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:36.747 [267/268] Linking target lib/librte_power.so.24.1 00:08:36.747 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:37.004 INFO: autodetecting backend as ninja 00:08:37.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp -j 72 00:08:55.073 CC lib/log/log_flags.o 00:08:55.073 CC lib/log/log.o 00:08:55.073 CC lib/log/log_deprecated.o 00:08:55.073 CC lib/ut/ut.o 00:08:55.073 make[3]: '/var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a' is up to date. 
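If only the bundled DPDK needs rebuilding, the backend command reported a few entries above can be invoked directly (a sketch; the ninja path, build directory, and job count are copied from the log):

    # Sketch: re-run just the DPDK sub-build with the backend command printed in the log
    /usr/local/bin/ninja -C /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp -j 72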
00:08:55.073 CC lib/ut_mock/mock.o 00:08:55.073 LIB libspdk_ut.a 00:08:55.073 LIB libspdk_log.a 00:08:55.073 SO libspdk_ut.so.2.0 00:08:55.073 SO libspdk_log.so.7.1 00:08:55.073 LIB libspdk_ut_mock.a 00:08:55.073 SYMLINK libspdk_ut.so 00:08:55.073 SO libspdk_ut_mock.so.6.0 00:08:55.073 SYMLINK libspdk_log.so 00:08:55.331 SYMLINK libspdk_ut_mock.so 00:08:55.589 CXX lib/trace_parser/trace.o 00:08:55.589 CC lib/ioat/ioat.o 00:08:55.589 CC lib/dma/dma.o 00:08:55.589 CC lib/util/base64.o 00:08:55.589 CC lib/util/cpuset.o 00:08:55.589 CC lib/util/bit_array.o 00:08:55.589 CC lib/util/crc16.o 00:08:55.589 CC lib/util/crc32.o 00:08:55.589 CC lib/util/crc32c.o 00:08:55.589 CC lib/util/crc32_ieee.o 00:08:55.589 CC lib/util/crc64.o 00:08:55.589 CC lib/util/dif.o 00:08:55.589 CC lib/util/fd.o 00:08:55.589 CC lib/util/fd_group.o 00:08:55.589 CC lib/util/file.o 00:08:55.589 CC lib/util/hexlify.o 00:08:55.589 CC lib/util/net.o 00:08:55.589 CC lib/util/iov.o 00:08:55.589 CC lib/util/math.o 00:08:55.589 CC lib/util/pipe.o 00:08:55.589 CC lib/util/strerror_tls.o 00:08:55.589 CC lib/util/string.o 00:08:55.589 CC lib/util/uuid.o 00:08:55.589 CC lib/util/zipf.o 00:08:55.589 CC lib/util/xor.o 00:08:55.589 CC lib/util/md5.o 00:08:55.589 CC lib/vfio_user/host/vfio_user_pci.o 00:08:55.589 CC lib/vfio_user/host/vfio_user.o 00:08:55.847 LIB libspdk_dma.a 00:08:55.847 SO libspdk_dma.so.5.0 00:08:55.847 LIB libspdk_ioat.a 00:08:55.847 SYMLINK libspdk_dma.so 00:08:55.847 SO libspdk_ioat.so.7.0 00:08:56.105 LIB libspdk_vfio_user.a 00:08:56.105 SYMLINK libspdk_ioat.so 00:08:56.105 SO libspdk_vfio_user.so.5.0 00:08:56.105 SYMLINK libspdk_vfio_user.so 00:08:56.105 LIB libspdk_util.a 00:08:56.364 SO libspdk_util.so.10.1 00:08:56.364 SYMLINK libspdk_util.so 00:08:56.622 LIB libspdk_trace_parser.a 00:08:56.622 SO libspdk_trace_parser.so.6.0 00:08:56.622 SYMLINK libspdk_trace_parser.so 00:08:56.880 CC lib/json/json_parse.o 00:08:56.880 CC lib/vmd/led.o 00:08:56.880 CC lib/rdma_utils/rdma_utils.o 00:08:56.880 CC lib/json/json_util.o 00:08:56.880 CC lib/idxd/idxd_user.o 00:08:56.880 CC lib/json/json_write.o 00:08:56.880 CC lib/vmd/vmd.o 00:08:56.880 CC lib/idxd/idxd.o 00:08:56.880 CC lib/env_dpdk/env.o 00:08:56.880 CC lib/idxd/idxd_kernel.o 00:08:56.880 CC lib/env_dpdk/memory.o 00:08:56.880 CC lib/env_dpdk/pci.o 00:08:56.880 CC lib/env_dpdk/threads.o 00:08:56.880 CC lib/env_dpdk/init.o 00:08:56.880 CC lib/env_dpdk/pci_virtio.o 00:08:56.880 CC lib/env_dpdk/pci_ioat.o 00:08:56.880 CC lib/env_dpdk/pci_vmd.o 00:08:56.880 CC lib/env_dpdk/pci_idxd.o 00:08:56.880 CC lib/env_dpdk/pci_event.o 00:08:56.881 CC lib/env_dpdk/sigbus_handler.o 00:08:56.881 CC lib/env_dpdk/pci_dpdk.o 00:08:56.881 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:56.881 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:56.881 CC lib/conf/conf.o 00:08:57.139 LIB libspdk_conf.a 00:08:57.139 LIB libspdk_rdma_utils.a 00:08:57.139 SO libspdk_conf.so.6.0 00:08:57.139 LIB libspdk_json.a 00:08:57.139 SO libspdk_rdma_utils.so.1.0 00:08:57.139 SYMLINK libspdk_conf.so 00:08:57.139 SO libspdk_json.so.6.0 00:08:57.139 SYMLINK libspdk_rdma_utils.so 00:08:57.139 SYMLINK libspdk_json.so 00:08:57.396 LIB libspdk_idxd.a 00:08:57.396 SO libspdk_idxd.so.12.1 00:08:57.396 LIB libspdk_vmd.a 00:08:57.396 SO libspdk_vmd.so.6.0 00:08:57.655 CC lib/rdma_provider/common.o 00:08:57.655 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:57.655 SYMLINK libspdk_idxd.so 00:08:57.655 CC lib/jsonrpc/jsonrpc_client.o 00:08:57.655 CC lib/jsonrpc/jsonrpc_server.o 00:08:57.655 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:57.655 
CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:57.655 SYMLINK libspdk_vmd.so 00:08:57.655 LIB libspdk_rdma_provider.a 00:08:57.914 SO libspdk_rdma_provider.so.7.0 00:08:57.914 SYMLINK libspdk_rdma_provider.so 00:08:57.914 LIB libspdk_jsonrpc.a 00:08:57.914 SO libspdk_jsonrpc.so.6.0 00:08:57.914 SYMLINK libspdk_jsonrpc.so 00:08:58.172 LIB libspdk_env_dpdk.a 00:08:58.431 SO libspdk_env_dpdk.so.15.1 00:08:58.431 CC lib/rpc/rpc.o 00:08:58.431 SYMLINK libspdk_env_dpdk.so 00:08:58.431 LIB libspdk_rpc.a 00:08:58.690 SO libspdk_rpc.so.6.0 00:08:58.690 SYMLINK libspdk_rpc.so 00:08:58.948 CC lib/notify/notify.o 00:08:58.948 CC lib/notify/notify_rpc.o 00:08:58.948 CC lib/trace/trace.o 00:08:58.948 CC lib/trace/trace_flags.o 00:08:58.948 CC lib/trace/trace_rpc.o 00:08:58.948 CC lib/keyring/keyring.o 00:08:58.948 CC lib/keyring/keyring_rpc.o 00:08:59.206 LIB libspdk_notify.a 00:08:59.206 SO libspdk_notify.so.6.0 00:08:59.206 LIB libspdk_keyring.a 00:08:59.206 SO libspdk_keyring.so.2.0 00:08:59.206 LIB libspdk_trace.a 00:08:59.206 SYMLINK libspdk_notify.so 00:08:59.465 SO libspdk_trace.so.11.0 00:08:59.465 SYMLINK libspdk_keyring.so 00:08:59.465 SYMLINK libspdk_trace.so 00:08:59.722 CC lib/thread/thread.o 00:08:59.722 CC lib/thread/iobuf.o 00:08:59.722 CC lib/sock/sock.o 00:08:59.722 CC lib/sock/sock_rpc.o 00:08:59.979 LIB libspdk_sock.a 00:09:00.236 SO libspdk_sock.so.10.0 00:09:00.236 SYMLINK libspdk_sock.so 00:09:00.493 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:00.493 CC lib/nvme/nvme_ctrlr.o 00:09:00.493 CC lib/nvme/nvme_fabric.o 00:09:00.493 CC lib/nvme/nvme_ns_cmd.o 00:09:00.493 CC lib/nvme/nvme_ns.o 00:09:00.493 CC lib/nvme/nvme_pcie_common.o 00:09:00.493 CC lib/nvme/nvme_pcie.o 00:09:00.493 CC lib/nvme/nvme_qpair.o 00:09:00.493 CC lib/nvme/nvme_quirks.o 00:09:00.493 CC lib/nvme/nvme.o 00:09:00.493 CC lib/nvme/nvme_discovery.o 00:09:00.493 CC lib/nvme/nvme_transport.o 00:09:00.493 CC lib/nvme/nvme_tcp.o 00:09:00.493 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:00.493 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:00.493 CC lib/nvme/nvme_opal.o 00:09:00.493 CC lib/nvme/nvme_io_msg.o 00:09:00.494 CC lib/nvme/nvme_poll_group.o 00:09:00.494 CC lib/nvme/nvme_zns.o 00:09:00.494 CC lib/nvme/nvme_cuse.o 00:09:00.494 CC lib/nvme/nvme_stubs.o 00:09:00.494 CC lib/nvme/nvme_auth.o 00:09:00.494 CC lib/nvme/nvme_rdma.o 00:09:01.427 LIB libspdk_thread.a 00:09:01.427 SO libspdk_thread.so.11.0 00:09:01.427 SYMLINK libspdk_thread.so 00:09:01.717 CC lib/fsdev/fsdev.o 00:09:01.717 CC lib/fsdev/fsdev_io.o 00:09:01.717 CC lib/fsdev/fsdev_rpc.o 00:09:01.717 CC lib/init/json_config.o 00:09:01.717 CC lib/init/subsystem.o 00:09:01.717 CC lib/init/subsystem_rpc.o 00:09:01.717 CC lib/init/rpc.o 00:09:01.717 CC lib/accel/accel.o 00:09:01.717 CC lib/virtio/virtio.o 00:09:01.717 CC lib/virtio/virtio_vfio_user.o 00:09:01.717 CC lib/virtio/virtio_vhost_user.o 00:09:01.717 CC lib/accel/accel_rpc.o 00:09:01.717 CC lib/accel/accel_sw.o 00:09:01.717 CC lib/virtio/virtio_pci.o 00:09:01.717 CC lib/blob/blobstore.o 00:09:01.717 CC lib/blob/request.o 00:09:01.717 CC lib/blob/zeroes.o 00:09:01.717 CC lib/blob/blob_bs_dev.o 00:09:02.045 LIB libspdk_init.a 00:09:02.045 SO libspdk_init.so.6.0 00:09:02.045 LIB libspdk_virtio.a 00:09:02.333 SO libspdk_virtio.so.7.0 00:09:02.333 SYMLINK libspdk_init.so 00:09:02.333 SYMLINK libspdk_virtio.so 00:09:02.333 LIB libspdk_fsdev.a 00:09:02.592 SO libspdk_fsdev.so.2.0 00:09:02.592 CC lib/event/log_rpc.o 00:09:02.592 CC lib/event/app.o 00:09:02.592 CC lib/event/reactor.o 00:09:02.592 CC lib/event/app_rpc.o 00:09:02.592 CC 
lib/event/scheduler_static.o 00:09:02.592 SYMLINK libspdk_fsdev.so 00:09:02.849 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:02.849 LIB libspdk_nvme.a 00:09:02.849 LIB libspdk_accel.a 00:09:03.108 LIB libspdk_event.a 00:09:03.108 SO libspdk_accel.so.16.0 00:09:03.108 SO libspdk_nvme.so.15.0 00:09:03.108 SO libspdk_event.so.14.0 00:09:03.108 SYMLINK libspdk_accel.so 00:09:03.108 SYMLINK libspdk_event.so 00:09:03.367 SYMLINK libspdk_nvme.so 00:09:03.367 CC lib/bdev/bdev.o 00:09:03.367 CC lib/bdev/bdev_rpc.o 00:09:03.367 CC lib/bdev/scsi_nvme.o 00:09:03.367 CC lib/bdev/bdev_zone.o 00:09:03.367 CC lib/bdev/part.o 00:09:03.625 LIB libspdk_fuse_dispatcher.a 00:09:03.625 SO libspdk_fuse_dispatcher.so.1.0 00:09:03.625 SYMLINK libspdk_fuse_dispatcher.so 00:09:04.997 LIB libspdk_blob.a 00:09:04.997 SO libspdk_blob.so.12.0 00:09:04.997 SYMLINK libspdk_blob.so 00:09:05.563 CC lib/lvol/lvol.o 00:09:05.563 CC lib/blobfs/blobfs.o 00:09:05.563 CC lib/blobfs/tree.o 00:09:05.563 LIB libspdk_bdev.a 00:09:05.563 SO libspdk_bdev.so.17.0 00:09:05.563 SYMLINK libspdk_bdev.so 00:09:06.130 CC lib/scsi/dev.o 00:09:06.130 CC lib/ftl/ftl_core.o 00:09:06.130 CC lib/ftl/ftl_layout.o 00:09:06.130 CC lib/scsi/lun.o 00:09:06.130 CC lib/nbd/nbd.o 00:09:06.130 CC lib/nbd/nbd_rpc.o 00:09:06.130 CC lib/scsi/port.o 00:09:06.130 CC lib/ftl/ftl_init.o 00:09:06.130 CC lib/scsi/scsi.o 00:09:06.130 CC lib/scsi/scsi_rpc.o 00:09:06.130 CC lib/ftl/ftl_debug.o 00:09:06.130 CC lib/scsi/scsi_bdev.o 00:09:06.130 CC lib/ftl/ftl_io.o 00:09:06.130 CC lib/scsi/scsi_pr.o 00:09:06.130 CC lib/ftl/ftl_sb.o 00:09:06.130 CC lib/scsi/task.o 00:09:06.130 CC lib/ftl/ftl_l2p.o 00:09:06.130 CC lib/ftl/ftl_l2p_flat.o 00:09:06.130 CC lib/ftl/ftl_nv_cache.o 00:09:06.130 CC lib/ftl/ftl_band.o 00:09:06.130 CC lib/ftl/ftl_band_ops.o 00:09:06.130 CC lib/ftl/ftl_writer.o 00:09:06.130 CC lib/nvmf/ctrlr.o 00:09:06.130 CC lib/ftl/ftl_rq.o 00:09:06.130 CC lib/nvmf/ctrlr_bdev.o 00:09:06.130 CC lib/ftl/ftl_reloc.o 00:09:06.130 CC lib/nvmf/transport.o 00:09:06.130 CC lib/nvmf/nvmf_rpc.o 00:09:06.130 CC lib/ftl/ftl_l2p_cache.o 00:09:06.130 CC lib/nvmf/ctrlr_discovery.o 00:09:06.130 CC lib/nvmf/subsystem.o 00:09:06.130 CC lib/nvmf/nvmf.o 00:09:06.130 CC lib/ftl/ftl_p2l.o 00:09:06.130 CC lib/nvmf/tcp.o 00:09:06.130 CC lib/ftl/ftl_p2l_log.o 00:09:06.130 CC lib/nvmf/stubs.o 00:09:06.130 CC lib/nvmf/mdns_server.o 00:09:06.130 CC lib/nvmf/rdma.o 00:09:06.130 CC lib/ftl/mngt/ftl_mngt.o 00:09:06.130 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:06.130 CC lib/nvmf/auth.o 00:09:06.130 CC lib/ublk/ublk_rpc.o 00:09:06.130 CC lib/ublk/ublk.o 00:09:06.130 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:06.131 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:06.131 CC lib/ftl/utils/ftl_conf.o 00:09:06.131 CC lib/ftl/utils/ftl_md.o 00:09:06.131 CC lib/ftl/utils/ftl_mempool.o 00:09:06.131 CC lib/ftl/utils/ftl_bitmap.o 00:09:06.131 CC lib/ftl/utils/ftl_property.o 00:09:06.131 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:06.131 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:06.131 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:06.131 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:06.131 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:06.131 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:06.131 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:06.131 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:06.131 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:06.131 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:06.131 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:06.389 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:06.389 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:06.389 CC lib/ftl/base/ftl_base_dev.o 00:09:06.389 CC lib/ftl/base/ftl_base_bdev.o 00:09:06.389 CC lib/ftl/ftl_trace.o 00:09:06.389 LIB libspdk_blobfs.a 00:09:06.647 SO libspdk_blobfs.so.11.0 00:09:06.647 SYMLINK libspdk_blobfs.so 00:09:06.647 LIB libspdk_lvol.a 00:09:06.647 SO libspdk_lvol.so.11.0 00:09:06.647 LIB libspdk_nbd.a 00:09:06.647 SO libspdk_nbd.so.7.0 00:09:06.647 SYMLINK libspdk_lvol.so 00:09:06.965 SYMLINK libspdk_nbd.so 00:09:06.965 LIB libspdk_scsi.a 00:09:06.965 SO libspdk_scsi.so.9.0 00:09:06.965 LIB libspdk_ublk.a 00:09:06.965 SO libspdk_ublk.so.3.0 00:09:06.965 SYMLINK libspdk_scsi.so 00:09:06.965 SYMLINK libspdk_ublk.so 00:09:07.222 CC lib/iscsi/conn.o 00:09:07.222 CC lib/iscsi/init_grp.o 00:09:07.222 CC lib/iscsi/iscsi.o 00:09:07.222 CC lib/iscsi/tgt_node.o 00:09:07.222 CC lib/iscsi/param.o 00:09:07.222 CC lib/iscsi/portal_grp.o 00:09:07.222 CC lib/iscsi/iscsi_subsystem.o 00:09:07.222 CC lib/iscsi/iscsi_rpc.o 00:09:07.222 CC lib/iscsi/task.o 00:09:07.222 CC lib/vhost/vhost.o 00:09:07.222 CC lib/vhost/vhost_rpc.o 00:09:07.222 CC lib/vhost/vhost_scsi.o 00:09:07.222 CC lib/vhost/vhost_blk.o 00:09:07.222 CC lib/vhost/rte_vhost_user.o 00:09:07.222 LIB libspdk_ftl.a 00:09:07.483 SO libspdk_ftl.so.9.0 00:09:08.049 SYMLINK libspdk_ftl.so 00:09:08.049 LIB libspdk_nvmf.a 00:09:08.049 SO libspdk_nvmf.so.20.0 00:09:08.307 SYMLINK libspdk_nvmf.so 00:09:08.565 LIB libspdk_vhost.a 00:09:08.565 SO libspdk_vhost.so.8.0 00:09:08.565 SYMLINK libspdk_vhost.so 00:09:08.822 LIB libspdk_iscsi.a 00:09:08.822 SO libspdk_iscsi.so.8.0 00:09:09.080 SYMLINK libspdk_iscsi.so 00:09:09.649 CC module/env_dpdk/env_dpdk_rpc.o 00:09:09.649 CC module/accel/error/accel_error_rpc.o 00:09:09.649 CC module/accel/error/accel_error.o 00:09:09.649 CC module/accel/dsa/accel_dsa.o 00:09:09.649 CC module/accel/dsa/accel_dsa_rpc.o 00:09:09.649 CC module/scheduler/gscheduler/gscheduler.o 00:09:09.649 CC module/keyring/file/keyring.o 00:09:09.649 CC module/keyring/file/keyring_rpc.o 00:09:09.649 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:09.649 CC module/keyring/linux/keyring_rpc.o 00:09:09.649 CC module/keyring/linux/keyring.o 00:09:09.649 CC module/fsdev/aio/fsdev_aio.o 00:09:09.649 CC module/accel/ioat/accel_ioat_rpc.o 00:09:09.649 LIB libspdk_env_dpdk_rpc.a 00:09:09.649 CC module/accel/iaa/accel_iaa.o 00:09:09.649 CC module/accel/ioat/accel_ioat.o 00:09:09.649 CC module/fsdev/aio/linux_aio_mgr.o 00:09:09.649 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:09.649 CC module/accel/iaa/accel_iaa_rpc.o 00:09:09.649 CC module/sock/posix/posix.o 00:09:09.649 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:09.649 CC module/blob/bdev/blob_bdev.o 00:09:09.649 SO libspdk_env_dpdk_rpc.so.6.0 00:09:09.649 SYMLINK libspdk_env_dpdk_rpc.so 00:09:09.906 LIB libspdk_scheduler_gscheduler.a 00:09:09.906 LIB libspdk_keyring_linux.a 00:09:09.906 LIB libspdk_keyring_file.a 00:09:09.906 LIB libspdk_accel_iaa.a 00:09:09.906 SO libspdk_keyring_linux.so.1.0 00:09:09.906 LIB libspdk_scheduler_dpdk_governor.a 00:09:09.906 LIB libspdk_accel_error.a 00:09:09.906 SO libspdk_scheduler_gscheduler.so.4.0 00:09:09.906 SO 
libspdk_keyring_file.so.2.0 00:09:09.906 SO libspdk_accel_iaa.so.3.0 00:09:09.906 LIB libspdk_scheduler_dynamic.a 00:09:09.906 LIB libspdk_accel_ioat.a 00:09:09.906 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:09.906 SO libspdk_accel_error.so.2.0 00:09:09.906 SO libspdk_accel_ioat.so.6.0 00:09:09.906 SYMLINK libspdk_keyring_file.so 00:09:09.906 SYMLINK libspdk_keyring_linux.so 00:09:09.906 SO libspdk_scheduler_dynamic.so.4.0 00:09:09.906 SYMLINK libspdk_scheduler_gscheduler.so 00:09:09.906 SYMLINK libspdk_accel_error.so 00:09:09.906 SYMLINK libspdk_accel_iaa.so 00:09:09.906 LIB libspdk_accel_dsa.a 00:09:09.906 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:09.906 SYMLINK libspdk_accel_ioat.so 00:09:09.906 SYMLINK libspdk_scheduler_dynamic.so 00:09:09.906 SO libspdk_accel_dsa.so.5.0 00:09:09.906 LIB libspdk_blob_bdev.a 00:09:10.164 SO libspdk_blob_bdev.so.12.0 00:09:10.164 SYMLINK libspdk_accel_dsa.so 00:09:10.165 SYMLINK libspdk_blob_bdev.so 00:09:10.423 LIB libspdk_sock_posix.a 00:09:10.423 SO libspdk_sock_posix.so.6.0 00:09:10.423 LIB libspdk_fsdev_aio.a 00:09:10.423 SYMLINK libspdk_sock_posix.so 00:09:10.423 SO libspdk_fsdev_aio.so.1.0 00:09:10.423 SYMLINK libspdk_fsdev_aio.so 00:09:10.423 CC module/bdev/lvol/vbdev_lvol.o 00:09:10.423 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:10.423 CC module/bdev/malloc/bdev_malloc.o 00:09:10.423 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:10.423 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:10.423 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:10.423 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:10.423 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:10.423 CC module/bdev/delay/vbdev_delay.o 00:09:10.423 CC module/bdev/passthru/vbdev_passthru.o 00:09:10.423 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:10.681 CC module/bdev/null/bdev_null.o 00:09:10.681 CC module/bdev/null/bdev_null_rpc.o 00:09:10.681 CC module/bdev/error/vbdev_error.o 00:09:10.681 CC module/bdev/error/vbdev_error_rpc.o 00:09:10.681 CC module/bdev/gpt/gpt.o 00:09:10.681 CC module/bdev/gpt/vbdev_gpt.o 00:09:10.681 CC module/bdev/ftl/bdev_ftl.o 00:09:10.681 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:10.681 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:10.681 CC module/bdev/split/vbdev_split.o 00:09:10.681 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:10.681 CC module/bdev/aio/bdev_aio.o 00:09:10.681 CC module/bdev/aio/bdev_aio_rpc.o 00:09:10.681 CC module/bdev/split/vbdev_split_rpc.o 00:09:10.681 CC module/bdev/raid/bdev_raid.o 00:09:10.681 CC module/bdev/raid/bdev_raid_rpc.o 00:09:10.681 CC module/bdev/raid/bdev_raid_sb.o 00:09:10.681 CC module/bdev/iscsi/bdev_iscsi.o 00:09:10.681 CC module/bdev/nvme/bdev_nvme.o 00:09:10.681 CC module/bdev/raid/raid0.o 00:09:10.681 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:10.681 CC module/bdev/raid/concat.o 00:09:10.681 CC module/bdev/raid/raid1.o 00:09:10.681 CC module/bdev/nvme/vbdev_opal.o 00:09:10.681 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:10.681 CC module/bdev/nvme/bdev_mdns_client.o 00:09:10.681 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:10.681 CC module/bdev/nvme/nvme_rpc.o 00:09:10.681 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:10.681 CC module/blobfs/bdev/blobfs_bdev.o 00:09:10.681 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:10.681 CC module/bdev/ocf/ctx.o 00:09:10.681 CC module/bdev/ocf/data.o 00:09:10.681 CC module/bdev/ocf/utils.o 00:09:10.681 CC module/bdev/ocf/stats.o 00:09:10.681 CC module/bdev/ocf/vbdev_ocf.o 00:09:10.681 CC module/bdev/ocf/vbdev_ocf_rpc.o 00:09:10.681 CC module/bdev/ocf/volume.o 
00:09:10.939 LIB libspdk_bdev_split.a 00:09:10.939 SO libspdk_bdev_split.so.6.0 00:09:10.939 LIB libspdk_bdev_error.a 00:09:10.939 LIB libspdk_bdev_passthru.a 00:09:10.939 SO libspdk_bdev_error.so.6.0 00:09:10.939 LIB libspdk_blobfs_bdev.a 00:09:10.939 SO libspdk_bdev_passthru.so.6.0 00:09:10.939 LIB libspdk_bdev_gpt.a 00:09:10.939 LIB libspdk_bdev_aio.a 00:09:10.939 SYMLINK libspdk_bdev_split.so 00:09:10.939 SO libspdk_blobfs_bdev.so.6.0 00:09:10.939 LIB libspdk_bdev_iscsi.a 00:09:10.939 SO libspdk_bdev_gpt.so.6.0 00:09:10.939 SYMLINK libspdk_bdev_error.so 00:09:10.939 SO libspdk_bdev_aio.so.6.0 00:09:10.939 LIB libspdk_bdev_ftl.a 00:09:10.939 LIB libspdk_bdev_delay.a 00:09:10.939 LIB libspdk_bdev_virtio.a 00:09:10.939 SYMLINK libspdk_bdev_passthru.so 00:09:10.939 SO libspdk_bdev_iscsi.so.6.0 00:09:10.939 SYMLINK libspdk_blobfs_bdev.so 00:09:10.939 LIB libspdk_bdev_null.a 00:09:11.197 SO libspdk_bdev_delay.so.6.0 00:09:11.197 SO libspdk_bdev_ftl.so.6.0 00:09:11.197 SO libspdk_bdev_virtio.so.6.0 00:09:11.197 SYMLINK libspdk_bdev_gpt.so 00:09:11.197 LIB libspdk_bdev_zone_block.a 00:09:11.197 SO libspdk_bdev_null.so.6.0 00:09:11.197 SYMLINK libspdk_bdev_aio.so 00:09:11.197 SYMLINK libspdk_bdev_iscsi.so 00:09:11.197 SO libspdk_bdev_zone_block.so.6.0 00:09:11.197 LIB libspdk_bdev_malloc.a 00:09:11.197 SYMLINK libspdk_bdev_ftl.so 00:09:11.197 SYMLINK libspdk_bdev_delay.so 00:09:11.197 LIB libspdk_bdev_lvol.a 00:09:11.197 SYMLINK libspdk_bdev_null.so 00:09:11.197 SYMLINK libspdk_bdev_virtio.so 00:09:11.197 SO libspdk_bdev_malloc.so.6.0 00:09:11.197 SYMLINK libspdk_bdev_zone_block.so 00:09:11.197 SO libspdk_bdev_lvol.so.6.0 00:09:11.197 SYMLINK libspdk_bdev_malloc.so 00:09:11.197 SYMLINK libspdk_bdev_lvol.so 00:09:11.197 LIB libspdk_bdev_ocf.a 00:09:11.456 SO libspdk_bdev_ocf.so.6.0 00:09:11.456 SYMLINK libspdk_bdev_ocf.so 00:09:11.714 LIB libspdk_bdev_raid.a 00:09:11.973 SO libspdk_bdev_raid.so.6.0 00:09:11.973 SYMLINK libspdk_bdev_raid.so 00:09:13.349 LIB libspdk_bdev_nvme.a 00:09:13.349 SO libspdk_bdev_nvme.so.7.1 00:09:13.607 SYMLINK libspdk_bdev_nvme.so 00:09:14.174 CC module/event/subsystems/keyring/keyring.o 00:09:14.174 CC module/event/subsystems/vmd/vmd.o 00:09:14.174 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:14.174 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:14.174 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:14.174 CC module/event/subsystems/iobuf/iobuf.o 00:09:14.174 CC module/event/subsystems/scheduler/scheduler.o 00:09:14.174 CC module/event/subsystems/sock/sock.o 00:09:14.174 CC module/event/subsystems/fsdev/fsdev.o 00:09:14.432 LIB libspdk_event_keyring.a 00:09:14.432 LIB libspdk_event_vmd.a 00:09:14.432 SO libspdk_event_keyring.so.1.0 00:09:14.432 LIB libspdk_event_fsdev.a 00:09:14.432 LIB libspdk_event_vhost_blk.a 00:09:14.432 LIB libspdk_event_iobuf.a 00:09:14.432 LIB libspdk_event_scheduler.a 00:09:14.432 LIB libspdk_event_sock.a 00:09:14.432 SO libspdk_event_vmd.so.6.0 00:09:14.432 SO libspdk_event_iobuf.so.3.0 00:09:14.432 SO libspdk_event_fsdev.so.1.0 00:09:14.432 SO libspdk_event_vhost_blk.so.3.0 00:09:14.432 SO libspdk_event_scheduler.so.4.0 00:09:14.432 SO libspdk_event_sock.so.5.0 00:09:14.432 SYMLINK libspdk_event_keyring.so 00:09:14.432 SYMLINK libspdk_event_fsdev.so 00:09:14.432 SYMLINK libspdk_event_vmd.so 00:09:14.432 SYMLINK libspdk_event_iobuf.so 00:09:14.432 SYMLINK libspdk_event_vhost_blk.so 00:09:14.432 SYMLINK libspdk_event_scheduler.so 00:09:14.432 SYMLINK libspdk_event_sock.so 00:09:14.997 CC module/event/subsystems/accel/accel.o 
00:09:14.997 LIB libspdk_event_accel.a 00:09:14.997 SO libspdk_event_accel.so.6.0 00:09:14.997 SYMLINK libspdk_event_accel.so 00:09:15.565 CC module/event/subsystems/bdev/bdev.o 00:09:15.565 LIB libspdk_event_bdev.a 00:09:15.565 SO libspdk_event_bdev.so.6.0 00:09:15.823 SYMLINK libspdk_event_bdev.so 00:09:16.082 CC module/event/subsystems/scsi/scsi.o 00:09:16.082 CC module/event/subsystems/ublk/ublk.o 00:09:16.082 CC module/event/subsystems/nbd/nbd.o 00:09:16.082 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:16.082 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:16.340 LIB libspdk_event_scsi.a 00:09:16.340 LIB libspdk_event_ublk.a 00:09:16.340 LIB libspdk_event_nbd.a 00:09:16.340 SO libspdk_event_nbd.so.6.0 00:09:16.340 SO libspdk_event_scsi.so.6.0 00:09:16.340 SO libspdk_event_ublk.so.3.0 00:09:16.340 LIB libspdk_event_nvmf.a 00:09:16.340 SO libspdk_event_nvmf.so.6.0 00:09:16.340 SYMLINK libspdk_event_nbd.so 00:09:16.340 SYMLINK libspdk_event_scsi.so 00:09:16.340 SYMLINK libspdk_event_ublk.so 00:09:16.340 SYMLINK libspdk_event_nvmf.so 00:09:16.599 CC module/event/subsystems/iscsi/iscsi.o 00:09:16.599 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:16.857 LIB libspdk_event_vhost_scsi.a 00:09:16.857 LIB libspdk_event_iscsi.a 00:09:16.857 SO libspdk_event_vhost_scsi.so.3.0 00:09:16.857 SO libspdk_event_iscsi.so.6.0 00:09:16.857 SYMLINK libspdk_event_vhost_scsi.so 00:09:16.857 SYMLINK libspdk_event_iscsi.so 00:09:17.114 SO libspdk.so.6.0 00:09:17.114 SYMLINK libspdk.so 00:09:17.684 CC app/trace_record/trace_record.o 00:09:17.684 CC app/spdk_lspci/spdk_lspci.o 00:09:17.684 CC app/spdk_nvme_identify/identify.o 00:09:17.684 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:17.684 TEST_HEADER include/spdk/accel.h 00:09:17.684 TEST_HEADER include/spdk/accel_module.h 00:09:17.684 TEST_HEADER include/spdk/assert.h 00:09:17.684 TEST_HEADER include/spdk/base64.h 00:09:17.684 TEST_HEADER include/spdk/barrier.h 00:09:17.684 CC app/spdk_nvme_discover/discovery_aer.o 00:09:17.684 TEST_HEADER include/spdk/bdev.h 00:09:17.684 CC app/spdk_nvme_perf/perf.o 00:09:17.684 CXX app/trace/trace.o 00:09:17.684 CC test/rpc_client/rpc_client_test.o 00:09:17.684 TEST_HEADER include/spdk/bdev_module.h 00:09:17.684 TEST_HEADER include/spdk/bdev_zone.h 00:09:17.684 TEST_HEADER include/spdk/bit_array.h 00:09:17.684 TEST_HEADER include/spdk/blob_bdev.h 00:09:17.684 TEST_HEADER include/spdk/bit_pool.h 00:09:17.684 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:17.684 TEST_HEADER include/spdk/blobfs.h 00:09:17.684 TEST_HEADER include/spdk/blob.h 00:09:17.684 TEST_HEADER include/spdk/conf.h 00:09:17.684 TEST_HEADER include/spdk/config.h 00:09:17.684 TEST_HEADER include/spdk/cpuset.h 00:09:17.684 CC app/spdk_top/spdk_top.o 00:09:17.684 TEST_HEADER include/spdk/crc32.h 00:09:17.684 TEST_HEADER include/spdk/crc16.h 00:09:17.684 TEST_HEADER include/spdk/dif.h 00:09:17.684 TEST_HEADER include/spdk/crc64.h 00:09:17.684 TEST_HEADER include/spdk/dma.h 00:09:17.684 TEST_HEADER include/spdk/endian.h 00:09:17.684 TEST_HEADER include/spdk/env_dpdk.h 00:09:17.684 TEST_HEADER include/spdk/env.h 00:09:17.684 TEST_HEADER include/spdk/event.h 00:09:17.684 TEST_HEADER include/spdk/fd.h 00:09:17.684 TEST_HEADER include/spdk/fd_group.h 00:09:17.684 TEST_HEADER include/spdk/file.h 00:09:17.684 TEST_HEADER include/spdk/fsdev.h 00:09:17.684 TEST_HEADER include/spdk/fsdev_module.h 00:09:17.684 TEST_HEADER include/spdk/ftl.h 00:09:17.684 TEST_HEADER include/spdk/fuse_dispatcher.h 00:09:17.684 TEST_HEADER include/spdk/gpt_spec.h 00:09:17.684 
TEST_HEADER include/spdk/hexlify.h 00:09:17.684 TEST_HEADER include/spdk/histogram_data.h 00:09:17.684 TEST_HEADER include/spdk/idxd.h 00:09:17.684 TEST_HEADER include/spdk/idxd_spec.h 00:09:17.684 TEST_HEADER include/spdk/init.h 00:09:17.684 TEST_HEADER include/spdk/ioat.h 00:09:17.684 TEST_HEADER include/spdk/ioat_spec.h 00:09:17.684 TEST_HEADER include/spdk/iscsi_spec.h 00:09:17.684 TEST_HEADER include/spdk/json.h 00:09:17.684 TEST_HEADER include/spdk/jsonrpc.h 00:09:17.684 TEST_HEADER include/spdk/keyring.h 00:09:17.684 CC app/spdk_dd/spdk_dd.o 00:09:17.684 TEST_HEADER include/spdk/keyring_module.h 00:09:17.684 TEST_HEADER include/spdk/log.h 00:09:17.684 TEST_HEADER include/spdk/likely.h 00:09:17.684 TEST_HEADER include/spdk/lvol.h 00:09:17.684 TEST_HEADER include/spdk/md5.h 00:09:17.684 TEST_HEADER include/spdk/memory.h 00:09:17.684 TEST_HEADER include/spdk/mmio.h 00:09:17.684 TEST_HEADER include/spdk/nbd.h 00:09:17.684 TEST_HEADER include/spdk/net.h 00:09:17.684 TEST_HEADER include/spdk/notify.h 00:09:17.684 TEST_HEADER include/spdk/nvme.h 00:09:17.684 TEST_HEADER include/spdk/nvme_intel.h 00:09:17.684 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:17.684 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:17.684 TEST_HEADER include/spdk/nvme_spec.h 00:09:17.685 TEST_HEADER include/spdk/nvme_zns.h 00:09:17.685 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:17.685 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:17.685 TEST_HEADER include/spdk/nvmf.h 00:09:17.685 TEST_HEADER include/spdk/nvmf_spec.h 00:09:17.685 TEST_HEADER include/spdk/nvmf_transport.h 00:09:17.685 TEST_HEADER include/spdk/opal.h 00:09:17.685 TEST_HEADER include/spdk/opal_spec.h 00:09:17.685 CC app/spdk_tgt/spdk_tgt.o 00:09:17.685 TEST_HEADER include/spdk/pci_ids.h 00:09:17.685 TEST_HEADER include/spdk/pipe.h 00:09:17.685 TEST_HEADER include/spdk/queue.h 00:09:17.685 TEST_HEADER include/spdk/reduce.h 00:09:17.685 TEST_HEADER include/spdk/rpc.h 00:09:17.685 TEST_HEADER include/spdk/scsi.h 00:09:17.685 TEST_HEADER include/spdk/scheduler.h 00:09:17.685 TEST_HEADER include/spdk/scsi_spec.h 00:09:17.685 TEST_HEADER include/spdk/stdinc.h 00:09:17.685 TEST_HEADER include/spdk/sock.h 00:09:17.685 CC app/iscsi_tgt/iscsi_tgt.o 00:09:17.685 TEST_HEADER include/spdk/string.h 00:09:17.685 TEST_HEADER include/spdk/thread.h 00:09:17.685 TEST_HEADER include/spdk/trace.h 00:09:17.685 TEST_HEADER include/spdk/trace_parser.h 00:09:17.685 TEST_HEADER include/spdk/tree.h 00:09:17.685 CC app/nvmf_tgt/nvmf_main.o 00:09:17.685 TEST_HEADER include/spdk/ublk.h 00:09:17.685 TEST_HEADER include/spdk/util.h 00:09:17.685 TEST_HEADER include/spdk/version.h 00:09:17.685 TEST_HEADER include/spdk/uuid.h 00:09:17.685 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:17.685 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:17.685 TEST_HEADER include/spdk/vmd.h 00:09:17.685 TEST_HEADER include/spdk/vhost.h 00:09:17.685 TEST_HEADER include/spdk/xor.h 00:09:17.685 TEST_HEADER include/spdk/zipf.h 00:09:17.685 CXX test/cpp_headers/accel_module.o 00:09:17.685 CXX test/cpp_headers/accel.o 00:09:17.685 CXX test/cpp_headers/assert.o 00:09:17.685 CXX test/cpp_headers/base64.o 00:09:17.685 CXX test/cpp_headers/barrier.o 00:09:17.685 CXX test/cpp_headers/bdev.o 00:09:17.685 CXX test/cpp_headers/bdev_module.o 00:09:17.685 CXX test/cpp_headers/bit_array.o 00:09:17.685 CXX test/cpp_headers/bdev_zone.o 00:09:17.685 CXX test/cpp_headers/bit_pool.o 00:09:17.685 CXX test/cpp_headers/blob_bdev.o 00:09:17.685 CXX test/cpp_headers/blobfs_bdev.o 00:09:17.685 CXX test/cpp_headers/blobfs.o 
00:09:17.685 CXX test/cpp_headers/blob.o 00:09:17.685 CXX test/cpp_headers/conf.o 00:09:17.685 CXX test/cpp_headers/config.o 00:09:17.685 CXX test/cpp_headers/cpuset.o 00:09:17.685 CXX test/cpp_headers/crc16.o 00:09:17.685 CXX test/cpp_headers/crc32.o 00:09:17.685 CXX test/cpp_headers/crc64.o 00:09:17.685 CXX test/cpp_headers/dif.o 00:09:17.685 CXX test/cpp_headers/endian.o 00:09:17.685 CXX test/cpp_headers/dma.o 00:09:17.685 CXX test/cpp_headers/env.o 00:09:17.685 CXX test/cpp_headers/env_dpdk.o 00:09:17.685 CXX test/cpp_headers/event.o 00:09:17.685 CC examples/ioat/verify/verify.o 00:09:17.685 CXX test/cpp_headers/fd_group.o 00:09:17.685 CXX test/cpp_headers/fd.o 00:09:17.685 CXX test/cpp_headers/file.o 00:09:17.685 CXX test/cpp_headers/fsdev.o 00:09:17.685 CXX test/cpp_headers/fsdev_module.o 00:09:17.685 CXX test/cpp_headers/ftl.o 00:09:17.685 CXX test/cpp_headers/fuse_dispatcher.o 00:09:17.685 CXX test/cpp_headers/gpt_spec.o 00:09:17.685 CXX test/cpp_headers/hexlify.o 00:09:17.685 CXX test/cpp_headers/histogram_data.o 00:09:17.685 CC examples/ioat/perf/perf.o 00:09:17.685 CXX test/cpp_headers/idxd.o 00:09:17.685 CXX test/cpp_headers/idxd_spec.o 00:09:17.685 CXX test/cpp_headers/init.o 00:09:17.685 CXX test/cpp_headers/ioat.o 00:09:17.685 CXX test/cpp_headers/ioat_spec.o 00:09:17.685 CXX test/cpp_headers/iscsi_spec.o 00:09:17.685 CC examples/util/zipf/zipf.o 00:09:17.685 CXX test/cpp_headers/json.o 00:09:17.685 CC test/env/pci/pci_ut.o 00:09:17.685 CC app/fio/nvme/fio_plugin.o 00:09:17.685 CC test/env/memory/memory_ut.o 00:09:17.685 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:17.685 CC test/app/stub/stub.o 00:09:17.685 CC test/env/vtophys/vtophys.o 00:09:17.685 CC test/thread/poller_perf/poller_perf.o 00:09:17.685 CC test/app/jsoncat/jsoncat.o 00:09:17.685 CC app/fio/bdev/fio_plugin.o 00:09:17.685 CC test/app/histogram_perf/histogram_perf.o 00:09:17.685 CC test/dma/test_dma/test_dma.o 00:09:17.948 LINK spdk_lspci 00:09:17.948 CC test/app/bdev_svc/bdev_svc.o 00:09:17.948 LINK spdk_nvme_discover 00:09:17.948 CC test/env/mem_callbacks/mem_callbacks.o 00:09:17.948 LINK interrupt_tgt 00:09:18.207 LINK rpc_client_test 00:09:18.207 LINK spdk_tgt 00:09:18.207 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:18.207 LINK zipf 00:09:18.207 LINK iscsi_tgt 00:09:18.207 CXX test/cpp_headers/jsonrpc.o 00:09:18.207 CXX test/cpp_headers/keyring.o 00:09:18.207 LINK spdk_trace_record 00:09:18.207 LINK jsoncat 00:09:18.207 LINK verify 00:09:18.207 LINK env_dpdk_post_init 00:09:18.207 CXX test/cpp_headers/keyring_module.o 00:09:18.207 LINK nvmf_tgt 00:09:18.207 CXX test/cpp_headers/likely.o 00:09:18.207 LINK vtophys 00:09:18.207 CXX test/cpp_headers/log.o 00:09:18.207 CXX test/cpp_headers/lvol.o 00:09:18.207 CXX test/cpp_headers/md5.o 00:09:18.207 LINK ioat_perf 00:09:18.207 CXX test/cpp_headers/memory.o 00:09:18.207 CXX test/cpp_headers/mmio.o 00:09:18.207 LINK stub 00:09:18.207 LINK poller_perf 00:09:18.207 CXX test/cpp_headers/nbd.o 00:09:18.207 CXX test/cpp_headers/net.o 00:09:18.207 CXX test/cpp_headers/notify.o 00:09:18.207 CXX test/cpp_headers/nvme.o 00:09:18.207 CXX test/cpp_headers/nvme_intel.o 00:09:18.207 CXX test/cpp_headers/nvme_ocssd.o 00:09:18.207 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:18.207 CXX test/cpp_headers/nvme_spec.o 00:09:18.207 CXX test/cpp_headers/nvme_zns.o 00:09:18.207 CXX test/cpp_headers/nvmf_cmd.o 00:09:18.207 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:18.207 CXX test/cpp_headers/nvmf.o 00:09:18.207 CXX test/cpp_headers/nvmf_spec.o 00:09:18.207 CXX 
test/cpp_headers/nvmf_transport.o 00:09:18.207 CXX test/cpp_headers/opal.o 00:09:18.207 CXX test/cpp_headers/opal_spec.o 00:09:18.207 CXX test/cpp_headers/pci_ids.o 00:09:18.207 LINK histogram_perf 00:09:18.207 CXX test/cpp_headers/pipe.o 00:09:18.207 CXX test/cpp_headers/queue.o 00:09:18.470 CXX test/cpp_headers/rpc.o 00:09:18.470 CXX test/cpp_headers/reduce.o 00:09:18.470 CXX test/cpp_headers/scheduler.o 00:09:18.470 LINK bdev_svc 00:09:18.470 CXX test/cpp_headers/sock.o 00:09:18.470 CXX test/cpp_headers/scsi.o 00:09:18.470 CXX test/cpp_headers/stdinc.o 00:09:18.470 CXX test/cpp_headers/string.o 00:09:18.470 CXX test/cpp_headers/scsi_spec.o 00:09:18.470 CXX test/cpp_headers/thread.o 00:09:18.470 CXX test/cpp_headers/trace.o 00:09:18.470 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:18.470 CXX test/cpp_headers/trace_parser.o 00:09:18.470 CXX test/cpp_headers/tree.o 00:09:18.470 CXX test/cpp_headers/ublk.o 00:09:18.470 CXX test/cpp_headers/util.o 00:09:18.470 CXX test/cpp_headers/uuid.o 00:09:18.470 CXX test/cpp_headers/version.o 00:09:18.470 CXX test/cpp_headers/vfio_user_pci.o 00:09:18.470 CXX test/cpp_headers/vfio_user_spec.o 00:09:18.470 LINK spdk_trace 00:09:18.470 CXX test/cpp_headers/vhost.o 00:09:18.470 CXX test/cpp_headers/vmd.o 00:09:18.470 CXX test/cpp_headers/xor.o 00:09:18.470 CXX test/cpp_headers/zipf.o 00:09:18.470 LINK spdk_dd 00:09:18.470 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:18.731 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:18.731 LINK pci_ut 00:09:18.731 LINK nvme_fuzz 00:09:18.989 LINK spdk_bdev 00:09:18.989 LINK spdk_nvme_perf 00:09:18.989 CC test/event/reactor/reactor.o 00:09:18.989 CC examples/vmd/led/led.o 00:09:18.989 CC examples/sock/hello_world/hello_sock.o 00:09:18.989 CC test/event/event_perf/event_perf.o 00:09:18.989 CC test/event/reactor_perf/reactor_perf.o 00:09:18.989 CC examples/vmd/lsvmd/lsvmd.o 00:09:18.989 LINK spdk_nvme 00:09:18.989 CC examples/idxd/perf/perf.o 00:09:18.989 CC test/event/app_repeat/app_repeat.o 00:09:18.989 LINK test_dma 00:09:18.989 CC examples/thread/thread/thread_ex.o 00:09:18.989 CC test/event/scheduler/scheduler.o 00:09:18.989 CC app/vhost/vhost.o 00:09:18.989 LINK spdk_nvme_identify 00:09:18.989 LINK lsvmd 00:09:19.247 LINK event_perf 00:09:19.247 LINK led 00:09:19.247 LINK reactor 00:09:19.247 LINK mem_callbacks 00:09:19.247 LINK reactor_perf 00:09:19.247 LINK app_repeat 00:09:19.247 LINK spdk_top 00:09:19.247 LINK vhost_fuzz 00:09:19.247 LINK vhost 00:09:19.247 LINK scheduler 00:09:19.247 LINK hello_sock 00:09:19.247 LINK thread 00:09:19.247 LINK idxd_perf 00:09:19.811 LINK memory_ut 00:09:19.811 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:19.811 CC examples/nvme/arbitration/arbitration.o 00:09:19.811 CC examples/nvme/reconnect/reconnect.o 00:09:19.811 CC examples/nvme/hello_world/hello_world.o 00:09:19.811 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:19.811 CC examples/nvme/abort/abort.o 00:09:19.811 CC examples/nvme/hotplug/hotplug.o 00:09:19.811 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:20.070 CC test/nvme/err_injection/err_injection.o 00:09:20.070 CC test/nvme/reset/reset.o 00:09:20.070 CC test/nvme/aer/aer.o 00:09:20.070 CC test/nvme/overhead/overhead.o 00:09:20.070 CC test/nvme/fused_ordering/fused_ordering.o 00:09:20.070 CC test/nvme/compliance/nvme_compliance.o 00:09:20.070 CC test/nvme/sgl/sgl.o 00:09:20.070 CC test/nvme/cuse/cuse.o 00:09:20.070 CC test/nvme/simple_copy/simple_copy.o 00:09:20.070 CC test/nvme/startup/startup.o 00:09:20.070 CC test/nvme/e2edp/nvme_dp.o 00:09:20.070 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:09:20.070 CC test/nvme/connect_stress/connect_stress.o 00:09:20.070 CC test/nvme/fdp/fdp.o 00:09:20.070 CC test/nvme/reserve/reserve.o 00:09:20.070 CC test/nvme/boot_partition/boot_partition.o 00:09:20.070 CC test/accel/dif/dif.o 00:09:20.070 LINK cmb_copy 00:09:20.070 LINK hello_world 00:09:20.070 CC test/blobfs/mkfs/mkfs.o 00:09:20.070 LINK pmr_persistence 00:09:20.070 CC test/lvol/esnap/esnap.o 00:09:20.070 LINK arbitration 00:09:20.070 LINK hotplug 00:09:20.070 LINK startup 00:09:20.070 LINK connect_stress 00:09:20.070 LINK boot_partition 00:09:20.070 LINK err_injection 00:09:20.328 LINK doorbell_aers 00:09:20.328 LINK fused_ordering 00:09:20.328 LINK reconnect 00:09:20.328 LINK abort 00:09:20.328 LINK sgl 00:09:20.328 CC examples/accel/perf/accel_perf.o 00:09:20.328 LINK overhead 00:09:20.328 LINK reserve 00:09:20.328 LINK reset 00:09:20.328 LINK simple_copy 00:09:20.328 LINK mkfs 00:09:20.328 LINK aer 00:09:20.328 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:20.328 LINK nvme_dp 00:09:20.328 CC examples/blob/hello_world/hello_blob.o 00:09:20.328 CC examples/blob/cli/blobcli.o 00:09:20.328 LINK fdp 00:09:20.328 LINK nvme_compliance 00:09:20.328 LINK nvme_manage 00:09:20.585 LINK iscsi_fuzz 00:09:20.585 LINK hello_blob 00:09:20.585 LINK hello_fsdev 00:09:20.585 LINK dif 00:09:20.842 LINK accel_perf 00:09:20.842 LINK blobcli 00:09:21.408 CC examples/bdev/bdevperf/bdevperf.o 00:09:21.408 CC examples/bdev/hello_world/hello_bdev.o 00:09:21.408 LINK cuse 00:09:21.667 LINK hello_bdev 00:09:21.667 CC test/bdev/bdevio/bdevio.o 00:09:22.232 LINK bdevperf 00:09:22.232 LINK bdevio 00:09:23.168 CC examples/nvmf/nvmf/nvmf.o 00:09:23.427 LINK nvmf 00:09:23.995 LINK esnap 00:09:24.562 00:09:24.562 real 1m12.592s 00:09:24.562 user 9m54.322s 00:09:24.562 sys 3m49.966s 00:09:24.562 13:40:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:24.562 13:40:55 make -- common/autotest_common.sh@10 -- $ set +x 00:09:24.562 ************************************ 00:09:24.562 END TEST make 00:09:24.562 ************************************ 00:09:24.562 13:40:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:24.562 13:40:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:24.562 13:40:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:24.562 13:40:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.562 13:40:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:09:24.562 13:40:55 -- pm/common@44 -- $ pid=3738607 00:09:24.562 13:40:55 -- pm/common@50 -- $ kill -TERM 3738607 00:09:24.562 13:40:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.562 13:40:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:09:24.562 13:40:55 -- pm/common@44 -- $ pid=3738609 00:09:24.562 13:40:55 -- pm/common@50 -- $ kill -TERM 3738609 00:09:24.562 13:40:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.562 13:40:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:09:24.562 13:40:55 -- pm/common@44 -- $ pid=3738611 00:09:24.562 13:40:55 -- pm/common@50 -- $ kill -TERM 3738611 00:09:24.562 13:40:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.562 13:40:55 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:09:24.562 13:40:55 -- pm/common@44 -- $ pid=3738637 00:09:24.562 13:40:55 -- pm/common@50 -- $ sudo -E kill -TERM 3738637 00:09:24.563 13:40:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:24.563 13:40:55 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:09:24.563 13:40:56 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.563 13:40:56 -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.563 13:40:56 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.822 13:40:56 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.822 13:40:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.822 13:40:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.822 13:40:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.822 13:40:56 -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.822 13:40:56 -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.822 13:40:56 -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.822 13:40:56 -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.822 13:40:56 -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.822 13:40:56 -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.822 13:40:56 -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.822 13:40:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.822 13:40:56 -- scripts/common.sh@344 -- # case "$op" in 00:09:24.822 13:40:56 -- scripts/common.sh@345 -- # : 1 00:09:24.822 13:40:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.822 13:40:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.822 13:40:56 -- scripts/common.sh@365 -- # decimal 1 00:09:24.822 13:40:56 -- scripts/common.sh@353 -- # local d=1 00:09:24.822 13:40:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.822 13:40:56 -- scripts/common.sh@355 -- # echo 1 00:09:24.822 13:40:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.822 13:40:56 -- scripts/common.sh@366 -- # decimal 2 00:09:24.822 13:40:56 -- scripts/common.sh@353 -- # local d=2 00:09:24.822 13:40:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.822 13:40:56 -- scripts/common.sh@355 -- # echo 2 00:09:24.822 13:40:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.822 13:40:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.822 13:40:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.822 13:40:56 -- scripts/common.sh@368 -- # return 0 00:09:24.822 13:40:56 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.822 13:40:56 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.822 --rc genhtml_branch_coverage=1 00:09:24.822 --rc genhtml_function_coverage=1 00:09:24.822 --rc genhtml_legend=1 00:09:24.822 --rc geninfo_all_blocks=1 00:09:24.822 --rc geninfo_unexecuted_blocks=1 00:09:24.822 00:09:24.822 ' 00:09:24.822 13:40:56 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.822 --rc genhtml_branch_coverage=1 00:09:24.822 --rc genhtml_function_coverage=1 00:09:24.822 --rc genhtml_legend=1 00:09:24.822 --rc geninfo_all_blocks=1 00:09:24.822 --rc geninfo_unexecuted_blocks=1 00:09:24.822 00:09:24.822 ' 00:09:24.822 13:40:56 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.822 --rc genhtml_branch_coverage=1 00:09:24.822 --rc genhtml_function_coverage=1 00:09:24.822 --rc genhtml_legend=1 00:09:24.822 --rc geninfo_all_blocks=1 00:09:24.822 --rc geninfo_unexecuted_blocks=1 00:09:24.822 00:09:24.822 ' 00:09:24.822 13:40:56 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.822 --rc genhtml_branch_coverage=1 00:09:24.822 --rc genhtml_function_coverage=1 00:09:24.822 --rc genhtml_legend=1 00:09:24.822 --rc geninfo_all_blocks=1 00:09:24.822 --rc geninfo_unexecuted_blocks=1 00:09:24.822 00:09:24.822 ' 00:09:24.822 13:40:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.822 13:40:56 -- nvmf/common.sh@7 -- # uname -s 00:09:24.822 13:40:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.822 13:40:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.822 13:40:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.823 13:40:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.823 13:40:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.823 13:40:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.823 13:40:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.823 13:40:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.823 13:40:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.823 13:40:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.823 13:40:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:804400cf-1c42-e711-906e-0012795d9712 00:09:24.823 13:40:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=804400cf-1c42-e711-906e-0012795d9712 00:09:24.823 13:40:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.823 13:40:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.823 13:40:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:24.823 13:40:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.823 13:40:56 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:09:24.823 13:40:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.823 13:40:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.823 13:40:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.823 13:40:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.823 13:40:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.823 13:40:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.823 13:40:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.823 13:40:56 -- paths/export.sh@5 -- # export PATH 00:09:24.823 13:40:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.823 13:40:56 -- nvmf/common.sh@51 -- # : 0 00:09:24.823 13:40:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.823 13:40:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.823 13:40:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.823 13:40:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.823 13:40:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.823 13:40:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.823 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.823 13:40:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.823 13:40:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.823 13:40:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.823 13:40:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:24.823 13:40:56 -- spdk/autotest.sh@32 -- # uname -s 00:09:24.823 13:40:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:24.823 13:40:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:24.823 13:40:56 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps 
00:09:24.823 13:40:56 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:09:24.823 13:40:56 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps 00:09:24.823 13:40:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:24.823 13:40:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:24.823 13:40:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:24.823 13:40:56 -- spdk/autotest.sh@48 -- # udevadm_pid=3818308 00:09:24.823 13:40:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:24.823 13:40:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:24.823 13:40:56 -- pm/common@17 -- # local monitor 00:09:24.823 13:40:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.823 13:40:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.823 13:40:56 -- pm/common@21 -- # date +%s 00:09:24.823 13:40:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.823 13:40:56 -- pm/common@21 -- # date +%s 00:09:24.823 13:40:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:24.823 13:40:56 -- pm/common@25 -- # sleep 1 00:09:24.823 13:40:56 -- pm/common@21 -- # date +%s 00:09:24.823 13:40:56 -- pm/common@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402456 00:09:24.823 13:40:56 -- pm/common@21 -- # date +%s 00:09:24.823 13:40:56 -- pm/common@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402456 00:09:24.823 13:40:56 -- pm/common@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402456 00:09:24.823 13:40:56 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733402456 00:09:24.823 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402456_collect-cpu-load.pm.log 00:09:24.823 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402456_collect-vmstat.pm.log 00:09:24.823 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402456_collect-cpu-temp.pm.log 00:09:24.823 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733402456_collect-bmc-pm.bmc.pm.log 00:09:25.766 13:40:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:25.766 13:40:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:25.766 13:40:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.766 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.766 13:40:57 -- spdk/autotest.sh@59 -- # create_test_list 00:09:25.766 13:40:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:09:25.766 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:09:25.766 13:40:57 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh 00:09:25.766 13:40:57 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvme-phy-autotest/spdk 00:09:25.766 13:40:57 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:09:25.766 13:40:57 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output 00:09:25.766 13:40:57 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvme-phy-autotest/spdk 00:09:25.766 13:40:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:25.766 13:40:57 -- common/autotest_common.sh@1457 -- # uname 00:09:25.766 13:40:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:09:25.766 13:40:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:25.766 13:40:57 -- common/autotest_common.sh@1477 -- # uname 00:09:25.766 13:40:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:09:25.766 13:40:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:25.766 13:40:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:26.024 lcov: LCOV version 1.15 00:09:26.024 13:40:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvme-phy-autotest/spdk -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info 00:09:44.122 /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:44.122 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:10:02.213 13:41:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:02.213 13:41:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.213 13:41:32 -- common/autotest_common.sh@10 -- # set +x 00:10:02.213 13:41:32 -- spdk/autotest.sh@78 -- # rm -f 00:10:02.213 13:41:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:10:04.743 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:10:04.743 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:10:05.680 13:41:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:05.680 13:41:37 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:05.680 13:41:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:05.680 13:41:37 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:05.680 13:41:37 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:05.680 13:41:37 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:05.680 13:41:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:05.680 13:41:37 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:10:05.680 13:41:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:05.680 13:41:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:05.680 13:41:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:05.680 13:41:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:05.680 13:41:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:05.680 13:41:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:05.680 13:41:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:05.680 13:41:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:05.680 13:41:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:10:05.680 13:41:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:05.680 13:41:37 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:05.680 No valid GPT data, bailing 00:10:05.940 13:41:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:05.940 13:41:37 -- scripts/common.sh@394 -- # pt= 00:10:05.940 13:41:37 -- scripts/common.sh@395 -- # return 1 00:10:05.940 13:41:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:05.940 1+0 records in 00:10:05.940 1+0 records out 00:10:05.940 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457085 s, 229 MB/s 00:10:05.940 13:41:37 -- spdk/autotest.sh@105 -- # sync 00:10:05.940 13:41:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:05.940 13:41:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:05.940 13:41:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:12.503 13:41:43 -- spdk/autotest.sh@111 -- # uname -s 00:10:12.503 13:41:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:10:12.503 13:41:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:10:12.503 13:41:43 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status 00:10:15.100 Hugepages 00:10:15.100 node hugesize free / total 00:10:15.100 node0 1048576kB 0 / 0 00:10:15.100 node0 2048kB 0 / 0 00:10:15.100 node1 1048576kB 0 / 0 00:10:15.100 node1 2048kB 0 / 0 00:10:15.100 00:10:15.100 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:15.100 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:10:15.100 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:10:15.100 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:10:15.100 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:10:15.100 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:10:15.100 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:10:15.100 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:10:15.100 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:10:15.100 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:10:15.100 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:10:15.100 13:41:46 -- spdk/autotest.sh@117 -- # uname -s 00:10:15.100 13:41:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:10:15.100 13:41:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:10:15.100 13:41:46 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:10:18.389 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:18.389 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:18.648 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:18.648 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:18.648 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:21.947 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:10:22.882 13:41:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:10:23.821 13:41:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:10:23.821 13:41:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:10:23.821 13:41:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:23.821 13:41:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:23.821 13:41:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:23.821 13:41:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:23.821 13:41:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:23.821 13:41:55 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:10:23.821 13:41:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:23.821 13:41:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:10:23.821 13:41:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:10:23.821 13:41:55 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:10:27.134 Waiting for block devices as requested 00:10:27.134 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:27.134 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:27.134 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:27.134 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:27.134 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:27.393 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:27.393 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:27.393 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:27.653 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:27.653 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:27.653 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:27.912 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:27.912 0000:80:04.3 (8086 
2021): vfio-pci -> ioatdma 00:10:27.912 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:28.172 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:28.172 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:28.172 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:10:29.108 13:42:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:29.108 13:42:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:10:29.108 13:42:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:10:29.108 13:42:00 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:10:29.108 13:42:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:10:29.108 13:42:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:10:29.108 13:42:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:10:29.108 13:42:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:10:29.108 13:42:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:10:29.108 13:42:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:10:29.108 13:42:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:10:29.108 13:42:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:29.108 13:42:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:29.367 13:42:00 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:10:29.367 13:42:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:29.367 13:42:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:29.367 13:42:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:10:29.367 13:42:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:29.367 13:42:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:29.367 13:42:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:29.367 13:42:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:29.367 13:42:00 -- common/autotest_common.sh@1543 -- # continue 00:10:29.367 13:42:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:29.367 13:42:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.367 13:42:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.367 13:42:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:29.367 13:42:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.367 13:42:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.367 13:42:00 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:10:32.655 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:32.655 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 
00:10:32.915 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:32.915 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:36.204 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:10:37.164 13:42:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:37.164 13:42:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.164 13:42:08 -- common/autotest_common.sh@10 -- # set +x 00:10:37.164 13:42:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:37.164 13:42:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:10:37.164 13:42:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:10:37.164 13:42:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:10:37.164 13:42:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:10:37.164 13:42:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:10:37.164 13:42:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:10:37.164 13:42:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:10:37.164 13:42:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:37.164 13:42:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:37.164 13:42:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:37.164 13:42:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:10:37.164 13:42:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:37.428 13:42:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:10:37.428 13:42:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:10:37.428 13:42:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:37.428 13:42:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:10:37.428 13:42:08 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:10:37.428 13:42:08 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:10:37.428 13:42:08 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:10:37.428 13:42:08 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:10:37.428 13:42:08 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:10:37.428 13:42:08 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:10:37.428 13:42:08 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3833765 00:10:37.428 13:42:08 -- common/autotest_common.sh@1585 -- # waitforlisten 3833765 00:10:37.428 13:42:08 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:10:37.428 13:42:08 -- common/autotest_common.sh@835 -- # '[' -z 3833765 ']' 00:10:37.428 13:42:08 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.428 13:42:08 -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.428 13:42:08 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.428 13:42:08 -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.428 13:42:08 -- common/autotest_common.sh@10 -- # set +x 00:10:37.428 [2024-12-05 13:42:08.777134] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:10:37.428 [2024-12-05 13:42:08.777212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3833765 ] 00:10:37.428 [2024-12-05 13:42:08.900791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.687 [2024-12-05 13:42:08.958243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.687 [2024-12-05 13:42:09.177686] 'OCF_Core' volume operations registered 00:10:37.687 [2024-12-05 13:42:09.177728] 'OCF_Cache' volume operations registered 00:10:37.687 [2024-12-05 13:42:09.182159] 'OCF Composite' volume operations registered 00:10:37.687 [2024-12-05 13:42:09.186649] 'SPDK_block_device' volume operations registered 00:10:37.944 13:42:09 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.944 13:42:09 -- common/autotest_common.sh@868 -- # return 0 00:10:37.944 13:42:09 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:10:37.944 13:42:09 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:10:37.944 13:42:09 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:10:41.231 nvme0n1 00:10:41.231 13:42:12 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:10:41.231 [2024-12-05 13:42:12.728688] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:10:41.231 request: 00:10:41.231 { 00:10:41.231 "nvme_ctrlr_name": "nvme0", 00:10:41.231 "password": "test", 00:10:41.231 "method": "bdev_nvme_opal_revert", 00:10:41.231 "req_id": 1 00:10:41.231 } 00:10:41.231 Got JSON-RPC error response 00:10:41.231 response: 00:10:41.231 { 00:10:41.231 "code": -32602, 00:10:41.231 "message": "Invalid parameters" 00:10:41.231 } 00:10:41.231 13:42:12 -- common/autotest_common.sh@1591 -- # true 00:10:41.231 13:42:12 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:10:41.231 13:42:12 -- common/autotest_common.sh@1595 -- # killprocess 3833765 00:10:41.231 13:42:12 -- common/autotest_common.sh@954 -- # '[' -z 3833765 ']' 00:10:41.231 13:42:12 -- common/autotest_common.sh@958 -- # kill -0 3833765 00:10:41.231 13:42:12 -- common/autotest_common.sh@959 -- # uname 00:10:41.489 13:42:12 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.489 13:42:12 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3833765 00:10:41.489 13:42:12 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.489 13:42:12 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.489 13:42:12 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3833765' 00:10:41.489 killing process with pid 3833765 00:10:41.489 13:42:12 -- common/autotest_common.sh@973 -- # kill 3833765 00:10:41.489 13:42:12 -- common/autotest_common.sh@978 -- # wait 3833765 00:10:45.681 13:42:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:45.681 13:42:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:45.681 13:42:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:45.681 13:42:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:45.681 13:42:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:45.681 13:42:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.681 13:42:16 -- common/autotest_common.sh@10 -- # set 
+x 00:10:45.681 13:42:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:45.681 13:42:16 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh 00:10:45.681 13:42:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.681 13:42:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.681 13:42:16 -- common/autotest_common.sh@10 -- # set +x 00:10:45.681 ************************************ 00:10:45.681 START TEST env 00:10:45.681 ************************************ 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh 00:10:45.681 * Looking for test storage... 00:10:45.681 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.681 13:42:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.681 13:42:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.681 13:42:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.681 13:42:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.681 13:42:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.681 13:42:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.681 13:42:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.681 13:42:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.681 13:42:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.681 13:42:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.681 13:42:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.681 13:42:17 env -- scripts/common.sh@344 -- # case "$op" in 00:10:45.681 13:42:17 env -- scripts/common.sh@345 -- # : 1 00:10:45.681 13:42:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.681 13:42:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.681 13:42:17 env -- scripts/common.sh@365 -- # decimal 1 00:10:45.681 13:42:17 env -- scripts/common.sh@353 -- # local d=1 00:10:45.681 13:42:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.681 13:42:17 env -- scripts/common.sh@355 -- # echo 1 00:10:45.681 13:42:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.681 13:42:17 env -- scripts/common.sh@366 -- # decimal 2 00:10:45.681 13:42:17 env -- scripts/common.sh@353 -- # local d=2 00:10:45.681 13:42:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.681 13:42:17 env -- scripts/common.sh@355 -- # echo 2 00:10:45.681 13:42:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.681 13:42:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.681 13:42:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.681 13:42:17 env -- scripts/common.sh@368 -- # return 0 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.681 --rc genhtml_branch_coverage=1 00:10:45.681 --rc genhtml_function_coverage=1 00:10:45.681 --rc genhtml_legend=1 00:10:45.681 --rc geninfo_all_blocks=1 00:10:45.681 --rc geninfo_unexecuted_blocks=1 00:10:45.681 00:10:45.681 ' 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.681 --rc genhtml_branch_coverage=1 00:10:45.681 --rc genhtml_function_coverage=1 00:10:45.681 --rc genhtml_legend=1 00:10:45.681 --rc geninfo_all_blocks=1 00:10:45.681 --rc geninfo_unexecuted_blocks=1 00:10:45.681 00:10:45.681 ' 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.681 --rc genhtml_branch_coverage=1 00:10:45.681 --rc genhtml_function_coverage=1 00:10:45.681 --rc genhtml_legend=1 00:10:45.681 --rc geninfo_all_blocks=1 00:10:45.681 --rc geninfo_unexecuted_blocks=1 00:10:45.681 00:10:45.681 ' 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.681 --rc genhtml_branch_coverage=1 00:10:45.681 --rc genhtml_function_coverage=1 00:10:45.681 --rc genhtml_legend=1 00:10:45.681 --rc geninfo_all_blocks=1 00:10:45.681 --rc geninfo_unexecuted_blocks=1 00:10:45.681 00:10:45.681 ' 00:10:45.681 13:42:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.681 13:42:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.681 13:42:17 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.681 ************************************ 00:10:45.681 START TEST env_memory 00:10:45.681 ************************************ 00:10:45.681 13:42:17 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut 00:10:45.940 00:10:45.940 00:10:45.940 CUnit - A unit testing framework for C - Version 2.1-3 00:10:45.940 http://cunit.sourceforge.net/ 00:10:45.940 00:10:45.940 00:10:45.940 Suite: memory 00:10:45.940 Test: alloc and free memory map ...[2024-12-05 13:42:17.243762] 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:45.940 passed 00:10:45.940 Test: mem map translation ...[2024-12-05 13:42:17.273019] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:45.940 [2024-12-05 13:42:17.273042] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:45.940 [2024-12-05 13:42:17.273097] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:45.940 [2024-12-05 13:42:17.273111] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:45.940 passed 00:10:45.940 Test: mem map registration ...[2024-12-05 13:42:17.330869] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:45.940 [2024-12-05 13:42:17.330892] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:45.940 passed 00:10:45.940 Test: mem map adjacent registrations ...passed 00:10:45.940 00:10:45.940 Run Summary: Type Total Ran Passed Failed Inactive 00:10:45.940 suites 1 1 n/a 0 0 00:10:45.940 tests 4 4 4 0 0 00:10:45.940 asserts 152 152 152 0 n/a 00:10:45.940 00:10:45.940 Elapsed time = 0.198 seconds 00:10:45.940 00:10:45.940 real 0m0.213s 00:10:45.940 user 0m0.203s 00:10:45.940 sys 0m0.009s 00:10:45.940 13:42:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.940 13:42:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:45.940 ************************************ 00:10:45.940 END TEST env_memory 00:10:45.940 ************************************ 00:10:45.940 13:42:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:45.940 13:42:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.940 13:42:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.940 13:42:17 env -- common/autotest_common.sh@10 -- # set +x 00:10:46.198 ************************************ 00:10:46.198 START TEST env_vtophys 00:10:46.198 ************************************ 00:10:46.198 13:42:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys 00:10:46.198 EAL: lib.eal log level changed from notice to debug 00:10:46.198 EAL: Detected lcore 0 as core 0 on socket 0 00:10:46.198 EAL: Detected lcore 1 as core 1 on socket 0 00:10:46.198 EAL: Detected lcore 2 as core 2 on socket 0 00:10:46.198 EAL: Detected lcore 3 as core 3 on socket 0 00:10:46.198 EAL: Detected lcore 4 as core 4 on socket 0 00:10:46.198 EAL: Detected lcore 5 as core 8 on socket 0 00:10:46.198 EAL: Detected lcore 6 as core 9 on socket 0 00:10:46.198 EAL: Detected lcore 7 as core 10 on socket 0 00:10:46.198 EAL: Detected lcore 8 as core 11 on socket 0 00:10:46.198 EAL: Detected lcore 9 as core 16 on socket 0 00:10:46.198 EAL: Detected lcore 10 as core 17 on socket 0 00:10:46.198 
EAL: Detected lcore 11 as core 18 on socket 0 00:10:46.198 EAL: Detected lcore 12 as core 19 on socket 0 00:10:46.198 EAL: Detected lcore 13 as core 20 on socket 0 00:10:46.198 EAL: Detected lcore 14 as core 24 on socket 0 00:10:46.198 EAL: Detected lcore 15 as core 25 on socket 0 00:10:46.198 EAL: Detected lcore 16 as core 26 on socket 0 00:10:46.198 EAL: Detected lcore 17 as core 27 on socket 0 00:10:46.198 EAL: Detected lcore 18 as core 0 on socket 1 00:10:46.198 EAL: Detected lcore 19 as core 1 on socket 1 00:10:46.198 EAL: Detected lcore 20 as core 2 on socket 1 00:10:46.198 EAL: Detected lcore 21 as core 3 on socket 1 00:10:46.198 EAL: Detected lcore 22 as core 4 on socket 1 00:10:46.198 EAL: Detected lcore 23 as core 8 on socket 1 00:10:46.198 EAL: Detected lcore 24 as core 9 on socket 1 00:10:46.198 EAL: Detected lcore 25 as core 10 on socket 1 00:10:46.198 EAL: Detected lcore 26 as core 11 on socket 1 00:10:46.198 EAL: Detected lcore 27 as core 16 on socket 1 00:10:46.198 EAL: Detected lcore 28 as core 17 on socket 1 00:10:46.198 EAL: Detected lcore 29 as core 18 on socket 1 00:10:46.198 EAL: Detected lcore 30 as core 19 on socket 1 00:10:46.198 EAL: Detected lcore 31 as core 20 on socket 1 00:10:46.198 EAL: Detected lcore 32 as core 24 on socket 1 00:10:46.198 EAL: Detected lcore 33 as core 25 on socket 1 00:10:46.198 EAL: Detected lcore 34 as core 26 on socket 1 00:10:46.198 EAL: Detected lcore 35 as core 27 on socket 1 00:10:46.198 EAL: Detected lcore 36 as core 0 on socket 0 00:10:46.198 EAL: Detected lcore 37 as core 1 on socket 0 00:10:46.199 EAL: Detected lcore 38 as core 2 on socket 0 00:10:46.199 EAL: Detected lcore 39 as core 3 on socket 0 00:10:46.199 EAL: Detected lcore 40 as core 4 on socket 0 00:10:46.199 EAL: Detected lcore 41 as core 8 on socket 0 00:10:46.199 EAL: Detected lcore 42 as core 9 on socket 0 00:10:46.199 EAL: Detected lcore 43 as core 10 on socket 0 00:10:46.199 EAL: Detected lcore 44 as core 11 on socket 0 00:10:46.199 EAL: Detected lcore 45 as core 16 on socket 0 00:10:46.199 EAL: Detected lcore 46 as core 17 on socket 0 00:10:46.199 EAL: Detected lcore 47 as core 18 on socket 0 00:10:46.199 EAL: Detected lcore 48 as core 19 on socket 0 00:10:46.199 EAL: Detected lcore 49 as core 20 on socket 0 00:10:46.199 EAL: Detected lcore 50 as core 24 on socket 0 00:10:46.199 EAL: Detected lcore 51 as core 25 on socket 0 00:10:46.199 EAL: Detected lcore 52 as core 26 on socket 0 00:10:46.199 EAL: Detected lcore 53 as core 27 on socket 0 00:10:46.199 EAL: Detected lcore 54 as core 0 on socket 1 00:10:46.199 EAL: Detected lcore 55 as core 1 on socket 1 00:10:46.199 EAL: Detected lcore 56 as core 2 on socket 1 00:10:46.199 EAL: Detected lcore 57 as core 3 on socket 1 00:10:46.199 EAL: Detected lcore 58 as core 4 on socket 1 00:10:46.199 EAL: Detected lcore 59 as core 8 on socket 1 00:10:46.199 EAL: Detected lcore 60 as core 9 on socket 1 00:10:46.199 EAL: Detected lcore 61 as core 10 on socket 1 00:10:46.199 EAL: Detected lcore 62 as core 11 on socket 1 00:10:46.199 EAL: Detected lcore 63 as core 16 on socket 1 00:10:46.199 EAL: Detected lcore 64 as core 17 on socket 1 00:10:46.199 EAL: Detected lcore 65 as core 18 on socket 1 00:10:46.199 EAL: Detected lcore 66 as core 19 on socket 1 00:10:46.199 EAL: Detected lcore 67 as core 20 on socket 1 00:10:46.199 EAL: Detected lcore 68 as core 24 on socket 1 00:10:46.199 EAL: Detected lcore 69 as core 25 on socket 1 00:10:46.199 EAL: Detected lcore 70 as core 26 on socket 1 00:10:46.199 EAL: Detected lcore 71 as core 27 
on socket 1 00:10:46.199 EAL: Maximum logical cores by configuration: 128 00:10:46.199 EAL: Detected CPU lcores: 72 00:10:46.199 EAL: Detected NUMA nodes: 2 00:10:46.199 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:46.199 EAL: Detected shared linkage of DPDK 00:10:46.199 EAL: No shared files mode enabled, IPC will be disabled 00:10:46.199 EAL: Bus pci wants IOVA as 'DC' 00:10:46.199 EAL: Buses did not request a specific IOVA mode. 00:10:46.199 EAL: IOMMU is available, selecting IOVA as VA mode. 00:10:46.199 EAL: Selected IOVA mode 'VA' 00:10:46.199 EAL: Probing VFIO support... 00:10:46.199 EAL: IOMMU type 1 (Type 1) is supported 00:10:46.199 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:46.199 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:46.199 EAL: VFIO support initialized 00:10:46.199 EAL: Ask a virtual area of 0x2e000 bytes 00:10:46.199 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:46.199 EAL: Setting up physically contiguous memory... 00:10:46.199 EAL: Setting maximum number of open files to 524288 00:10:46.199 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:46.199 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:10:46.199 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:46.199 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x201400a00000 (size = 
0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:10:46.199 EAL: Ask a virtual area of 0x61000 bytes 00:10:46.199 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:10:46.199 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:10:46.199 EAL: Ask a virtual area of 0x400000000 bytes 00:10:46.199 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:10:46.199 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:10:46.199 EAL: Hugepages will be freed exactly as allocated. 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: TSC frequency is ~2300000 KHz 00:10:46.199 EAL: Main lcore 0 is ready (tid=7f758bc17a00;cpuset=[0]) 00:10:46.199 EAL: Trying to obtain current memory policy. 00:10:46.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.199 EAL: Restoring previous memory policy: 0 00:10:46.199 EAL: request: mp_malloc_sync 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: Heap on socket 0 was expanded by 2MB 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:46.199 EAL: Mem event callback 'spdk:(nil)' registered 00:10:46.199 00:10:46.199 00:10:46.199 CUnit - A unit testing framework for C - Version 2.1-3 00:10:46.199 http://cunit.sourceforge.net/ 00:10:46.199 00:10:46.199 00:10:46.199 Suite: components_suite 00:10:46.199 Test: vtophys_malloc_test ...passed 00:10:46.199 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:46.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.199 EAL: Restoring previous memory policy: 4 00:10:46.199 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.199 EAL: request: mp_malloc_sync 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: Heap on socket 0 was expanded by 4MB 00:10:46.199 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.199 EAL: request: mp_malloc_sync 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: Heap on socket 0 was shrunk by 4MB 00:10:46.199 EAL: Trying to obtain current memory policy. 00:10:46.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.199 EAL: Restoring previous memory policy: 4 00:10:46.199 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.199 EAL: request: mp_malloc_sync 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: Heap on socket 0 was expanded by 6MB 00:10:46.199 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.199 EAL: request: mp_malloc_sync 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: Heap on socket 0 was shrunk by 6MB 00:10:46.199 EAL: Trying to obtain current memory policy. 
00:10:46.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.199 EAL: Restoring previous memory policy: 4 00:10:46.199 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.199 EAL: request: mp_malloc_sync 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: Heap on socket 0 was expanded by 10MB 00:10:46.199 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.199 EAL: request: mp_malloc_sync 00:10:46.199 EAL: No shared files mode enabled, IPC is disabled 00:10:46.199 EAL: Heap on socket 0 was shrunk by 10MB 00:10:46.199 EAL: Trying to obtain current memory policy. 00:10:46.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.199 EAL: Restoring previous memory policy: 4 00:10:46.199 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.200 EAL: request: mp_malloc_sync 00:10:46.200 EAL: No shared files mode enabled, IPC is disabled 00:10:46.200 EAL: Heap on socket 0 was expanded by 18MB 00:10:46.200 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.200 EAL: request: mp_malloc_sync 00:10:46.200 EAL: No shared files mode enabled, IPC is disabled 00:10:46.200 EAL: Heap on socket 0 was shrunk by 18MB 00:10:46.200 EAL: Trying to obtain current memory policy. 00:10:46.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.200 EAL: Restoring previous memory policy: 4 00:10:46.200 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.200 EAL: request: mp_malloc_sync 00:10:46.200 EAL: No shared files mode enabled, IPC is disabled 00:10:46.200 EAL: Heap on socket 0 was expanded by 34MB 00:10:46.200 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.200 EAL: request: mp_malloc_sync 00:10:46.200 EAL: No shared files mode enabled, IPC is disabled 00:10:46.200 EAL: Heap on socket 0 was shrunk by 34MB 00:10:46.200 EAL: Trying to obtain current memory policy. 00:10:46.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.200 EAL: Restoring previous memory policy: 4 00:10:46.200 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.200 EAL: request: mp_malloc_sync 00:10:46.200 EAL: No shared files mode enabled, IPC is disabled 00:10:46.200 EAL: Heap on socket 0 was expanded by 66MB 00:10:46.200 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.200 EAL: request: mp_malloc_sync 00:10:46.200 EAL: No shared files mode enabled, IPC is disabled 00:10:46.200 EAL: Heap on socket 0 was shrunk by 66MB 00:10:46.200 EAL: Trying to obtain current memory policy. 00:10:46.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.200 EAL: Restoring previous memory policy: 4 00:10:46.200 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.200 EAL: request: mp_malloc_sync 00:10:46.200 EAL: No shared files mode enabled, IPC is disabled 00:10:46.200 EAL: Heap on socket 0 was expanded by 130MB 00:10:46.458 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.458 EAL: request: mp_malloc_sync 00:10:46.458 EAL: No shared files mode enabled, IPC is disabled 00:10:46.458 EAL: Heap on socket 0 was shrunk by 130MB 00:10:46.458 EAL: Trying to obtain current memory policy. 
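[The expand/shrink notices above, and the larger rounds that continue below, are the vtophys test allocating progressively bigger buffers while the EAL grows and releases its 2 MB hugepage heap in response. None of the following is emitted by the test; it is only a hedged, host-side sketch of how that hugepage pool could be watched on the test node while such a run is in flight:

  # Host-side observation only; assumes the 2 MB hugepages reported in the EAL lines above.
  grep Huge /proc/meminfo                                     # HugePages_Total / HugePages_Free counters
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  watch -n1 'grep HugePages_Free /proc/meminfo'               # free count dips while the heap is expanded
]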
00:10:46.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.458 EAL: Restoring previous memory policy: 4 00:10:46.458 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.458 EAL: request: mp_malloc_sync 00:10:46.458 EAL: No shared files mode enabled, IPC is disabled 00:10:46.458 EAL: Heap on socket 0 was expanded by 258MB 00:10:46.458 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.458 EAL: request: mp_malloc_sync 00:10:46.458 EAL: No shared files mode enabled, IPC is disabled 00:10:46.458 EAL: Heap on socket 0 was shrunk by 258MB 00:10:46.458 EAL: Trying to obtain current memory policy. 00:10:46.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.716 EAL: Restoring previous memory policy: 4 00:10:46.716 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.716 EAL: request: mp_malloc_sync 00:10:46.716 EAL: No shared files mode enabled, IPC is disabled 00:10:46.716 EAL: Heap on socket 0 was expanded by 514MB 00:10:46.716 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.716 EAL: request: mp_malloc_sync 00:10:46.716 EAL: No shared files mode enabled, IPC is disabled 00:10:46.716 EAL: Heap on socket 0 was shrunk by 514MB 00:10:46.716 EAL: Trying to obtain current memory policy. 00:10:46.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.974 EAL: Restoring previous memory policy: 4 00:10:46.974 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.974 EAL: request: mp_malloc_sync 00:10:46.974 EAL: No shared files mode enabled, IPC is disabled 00:10:46.974 EAL: Heap on socket 0 was expanded by 1026MB 00:10:47.233 EAL: Calling mem event callback 'spdk:(nil)' 00:10:47.492 EAL: request: mp_malloc_sync 00:10:47.492 EAL: No shared files mode enabled, IPC is disabled 00:10:47.492 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:47.492 passed 00:10:47.492 00:10:47.492 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.492 suites 1 1 n/a 0 0 00:10:47.492 tests 2 2 2 0 0 00:10:47.492 asserts 497 497 497 0 n/a 00:10:47.492 00:10:47.492 Elapsed time = 1.192 seconds 00:10:47.492 EAL: Calling mem event callback 'spdk:(nil)' 00:10:47.492 EAL: request: mp_malloc_sync 00:10:47.492 EAL: No shared files mode enabled, IPC is disabled 00:10:47.492 EAL: Heap on socket 0 was shrunk by 2MB 00:10:47.492 EAL: No shared files mode enabled, IPC is disabled 00:10:47.492 EAL: No shared files mode enabled, IPC is disabled 00:10:47.492 EAL: No shared files mode enabled, IPC is disabled 00:10:47.492 00:10:47.492 real 0m1.372s 00:10:47.492 user 0m0.781s 00:10:47.492 sys 0m0.559s 00:10:47.492 13:42:18 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.492 13:42:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:47.492 ************************************ 00:10:47.492 END TEST env_vtophys 00:10:47.492 ************************************ 00:10:47.492 13:42:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut 00:10:47.492 13:42:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.492 13:42:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.492 13:42:18 env -- common/autotest_common.sh@10 -- # set +x 00:10:47.492 ************************************ 00:10:47.492 START TEST env_pci 00:10:47.492 ************************************ 00:10:47.492 13:42:18 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut 00:10:47.492 00:10:47.492 00:10:47.492 CUnit - A unit testing framework for 
C - Version 2.1-3 00:10:47.492 http://cunit.sourceforge.net/ 00:10:47.492 00:10:47.492 00:10:47.492 Suite: pci 00:10:47.492 Test: pci_hook ...[2024-12-05 13:42:18.953196] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3835255 has claimed it 00:10:47.492 EAL: Cannot find device (10000:00:01.0) 00:10:47.492 EAL: Failed to attach device on primary process 00:10:47.492 passed 00:10:47.492 00:10:47.492 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.492 suites 1 1 n/a 0 0 00:10:47.492 tests 1 1 1 0 0 00:10:47.492 asserts 25 25 25 0 n/a 00:10:47.492 00:10:47.492 Elapsed time = 0.035 seconds 00:10:47.492 00:10:47.492 real 0m0.058s 00:10:47.492 user 0m0.019s 00:10:47.492 sys 0m0.039s 00:10:47.492 13:42:18 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.492 13:42:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:47.492 ************************************ 00:10:47.492 END TEST env_pci 00:10:47.492 ************************************ 00:10:47.751 13:42:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:47.751 13:42:19 env -- env/env.sh@15 -- # uname 00:10:47.751 13:42:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:47.751 13:42:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:47.751 13:42:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:47.751 13:42:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.751 13:42:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.751 13:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:10:47.751 ************************************ 00:10:47.751 START TEST env_dpdk_post_init 00:10:47.751 ************************************ 00:10:47.751 13:42:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:47.751 EAL: Detected CPU lcores: 72 00:10:47.751 EAL: Detected NUMA nodes: 2 00:10:47.751 EAL: Detected shared linkage of DPDK 00:10:47.751 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:47.751 EAL: Selected IOVA mode 'VA' 00:10:47.751 EAL: VFIO support initialized 00:10:47.751 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:47.751 EAL: Using IOMMU type 1 (Type 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:00:04.6 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:10:48.010 EAL: Ignore mapping IO port bar(1) 00:10:48.010 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:10:48.947 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:10:54.218 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:10:54.218 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:10:54.476 Starting DPDK initialization... 00:10:54.476 Starting SPDK post initialization... 00:10:54.476 SPDK NVMe probe 00:10:54.476 Attaching to 0000:d8:00.0 00:10:54.476 Attached to 0000:d8:00.0 00:10:54.476 Cleaning up... 00:10:54.476 00:10:54.476 real 0m6.795s 00:10:54.476 user 0m4.881s 00:10:54.476 sys 0m0.959s 00:10:54.476 13:42:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.476 13:42:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:54.476 ************************************ 00:10:54.476 END TEST env_dpdk_post_init 00:10:54.476 ************************************ 00:10:54.476 13:42:25 env -- env/env.sh@26 -- # uname 00:10:54.476 13:42:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:54.476 13:42:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:54.476 13:42:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:54.476 13:42:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.476 13:42:25 env -- common/autotest_common.sh@10 -- # set +x 00:10:54.476 ************************************ 00:10:54.476 START TEST env_mem_callbacks 00:10:54.476 ************************************ 00:10:54.476 13:42:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:10:54.476 EAL: Detected CPU lcores: 72 00:10:54.476 EAL: Detected NUMA nodes: 2 00:10:54.476 EAL: Detected shared linkage of DPDK 00:10:54.476 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:54.734 EAL: Selected IOVA mode 'VA' 00:10:54.734 EAL: VFIO support initialized 00:10:54.734 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:54.734 00:10:54.734 00:10:54.734 CUnit - A unit testing framework for C - Version 2.1-3 00:10:54.734 http://cunit.sourceforge.net/ 00:10:54.734 00:10:54.734 00:10:54.734 Suite: memory 00:10:54.734 Test: test ... 
00:10:54.734 register 0x200000200000 2097152 00:10:54.734 malloc 3145728 00:10:54.734 register 0x200000400000 4194304 00:10:54.734 buf 0x200000500000 len 3145728 PASSED 00:10:54.734 malloc 64 00:10:54.734 buf 0x2000004fff40 len 64 PASSED 00:10:54.734 malloc 4194304 00:10:54.734 register 0x200000800000 6291456 00:10:54.734 buf 0x200000a00000 len 4194304 PASSED 00:10:54.734 free 0x200000500000 3145728 00:10:54.734 free 0x2000004fff40 64 00:10:54.734 unregister 0x200000400000 4194304 PASSED 00:10:54.734 free 0x200000a00000 4194304 00:10:54.734 unregister 0x200000800000 6291456 PASSED 00:10:54.734 malloc 8388608 00:10:54.734 register 0x200000400000 10485760 00:10:54.734 buf 0x200000600000 len 8388608 PASSED 00:10:54.734 free 0x200000600000 8388608 00:10:54.734 unregister 0x200000400000 10485760 PASSED 00:10:54.734 passed 00:10:54.734 00:10:54.734 Run Summary: Type Total Ran Passed Failed Inactive 00:10:54.734 suites 1 1 n/a 0 0 00:10:54.734 tests 1 1 1 0 0 00:10:54.734 asserts 15 15 15 0 n/a 00:10:54.734 00:10:54.734 Elapsed time = 0.008 seconds 00:10:54.734 00:10:54.734 real 0m0.081s 00:10:54.734 user 0m0.019s 00:10:54.734 sys 0m0.061s 00:10:54.735 13:42:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.735 13:42:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:54.735 ************************************ 00:10:54.735 END TEST env_mem_callbacks 00:10:54.735 ************************************ 00:10:54.735 00:10:54.735 real 0m9.052s 00:10:54.735 user 0m6.109s 00:10:54.735 sys 0m1.986s 00:10:54.735 13:42:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.735 13:42:26 env -- common/autotest_common.sh@10 -- # set +x 00:10:54.735 ************************************ 00:10:54.735 END TEST env 00:10:54.735 ************************************ 00:10:54.735 13:42:26 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh 00:10:54.735 13:42:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:54.735 13:42:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.735 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:10:54.735 ************************************ 00:10:54.735 START TEST rpc 00:10:54.735 ************************************ 00:10:54.735 13:42:26 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh 00:10:54.994 * Looking for test storage... 
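[The env suite has just completed above (memory, vtophys, pci, dpdk_post_init and mem_callbacks in roughly 9 seconds of wall time) and the rpc suite is starting below. Outside of Jenkins the same suite can be re-run on its own; a rough sketch using the paths from this log, assuming the tree is built, the commands run as root, and hugepages plus the 0000:d8:00.0 NVMe device are already configured for DPDK:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  sudo scripts/setup.sh status     # shows hugepage counts and which driver currently holds 0000:d8:00.0
  sudo test/env/env.sh             # re-runs env_memory, env_vtophys, env_pci, env_dpdk_post_init, env_mem_callbacks
]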
00:10:54.994 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.994 13:42:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.994 13:42:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.994 13:42:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.994 13:42:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.994 13:42:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.994 13:42:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.994 13:42:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.994 13:42:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:54.994 13:42:26 rpc -- scripts/common.sh@345 -- # : 1 00:10:54.994 13:42:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.994 13:42:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.994 13:42:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:54.994 13:42:26 rpc -- scripts/common.sh@353 -- # local d=1 00:10:54.994 13:42:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.994 13:42:26 rpc -- scripts/common.sh@355 -- # echo 1 00:10:54.994 13:42:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.994 13:42:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@353 -- # local d=2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.994 13:42:26 rpc -- scripts/common.sh@355 -- # echo 2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.994 13:42:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.994 13:42:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.994 13:42:26 rpc -- scripts/common.sh@368 -- # return 0 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.994 --rc genhtml_branch_coverage=1 00:10:54.994 --rc genhtml_function_coverage=1 00:10:54.994 --rc genhtml_legend=1 00:10:54.994 --rc geninfo_all_blocks=1 00:10:54.994 --rc geninfo_unexecuted_blocks=1 00:10:54.994 00:10:54.994 ' 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.994 --rc genhtml_branch_coverage=1 00:10:54.994 --rc genhtml_function_coverage=1 00:10:54.994 --rc genhtml_legend=1 00:10:54.994 --rc geninfo_all_blocks=1 00:10:54.994 --rc geninfo_unexecuted_blocks=1 00:10:54.994 00:10:54.994 ' 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.994 --rc genhtml_branch_coverage=1 00:10:54.994 --rc genhtml_function_coverage=1 00:10:54.994 
--rc genhtml_legend=1 00:10:54.994 --rc geninfo_all_blocks=1 00:10:54.994 --rc geninfo_unexecuted_blocks=1 00:10:54.994 00:10:54.994 ' 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:54.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.994 --rc genhtml_branch_coverage=1 00:10:54.994 --rc genhtml_function_coverage=1 00:10:54.994 --rc genhtml_legend=1 00:10:54.994 --rc geninfo_all_blocks=1 00:10:54.994 --rc geninfo_unexecuted_blocks=1 00:10:54.994 00:10:54.994 ' 00:10:54.994 13:42:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3836274 00:10:54.994 13:42:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:54.994 13:42:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:10:54.994 13:42:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3836274 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 3836274 ']' 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.994 13:42:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.995 13:42:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.995 [2024-12-05 13:42:26.390688] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:10:54.995 [2024-12-05 13:42:26.390745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836274 ] 00:10:54.995 [2024-12-05 13:42:26.495511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.253 [2024-12-05 13:42:26.552064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:55.253 [2024-12-05 13:42:26.552108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3836274' to capture a snapshot of events at runtime. 00:10:55.253 [2024-12-05 13:42:26.552123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.253 [2024-12-05 13:42:26.552135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.253 [2024-12-05 13:42:26.552146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3836274 for offline analysis/debug. 
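[Here rpc.sh is bringing up spdk_tgt with the bdev tracepoint group enabled; once it is listening on the default /var/tmp/spdk.sock it is driven through scripts/rpc.py. The rpc_cmd calls traced below in the integrity test reduce to plain rpc.py invocations; a hand-run sketch of the same sequence, assuming the target from this log is still listening:

  RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/spdk.sock bdev_malloc_create 8 512                   # the trace below shows it coming back as Malloc0
  $RPC -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
  $RPC -s /var/tmp/spdk.sock bdev_get_bdevs | jq length                 # 2 while both bdevs exist
  $RPC -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
  $RPC -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0
]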
00:10:55.253 [2024-12-05 13:42:26.552760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.253 [2024-12-05 13:42:26.770962] 'OCF_Core' volume operations registered 00:10:55.253 [2024-12-05 13:42:26.770995] 'OCF_Cache' volume operations registered 00:10:55.253 [2024-12-05 13:42:26.775405] 'OCF Composite' volume operations registered 00:10:55.512 [2024-12-05 13:42:26.779880] 'SPDK_block_device' volume operations registered 00:10:55.512 13:42:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.512 13:42:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:55.512 13:42:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:10:55.512 13:42:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:10:55.512 13:42:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:55.512 13:42:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:55.512 13:42:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.512 13:42:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.512 13:42:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.512 ************************************ 00:10:55.512 START TEST rpc_integrity 00:10:55.512 ************************************ 00:10:55.512 13:42:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:55.512 13:42:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.512 13:42:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.512 13:42:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.512 13:42:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.512 13:42:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:55.512 13:42:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:55.512 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:55.512 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:55.512 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.512 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:55.771 { 00:10:55.771 "name": "Malloc0", 00:10:55.771 "aliases": [ 00:10:55.771 "a98faae7-a78c-4e50-9d15-9db3bf0a884b" 00:10:55.771 ], 00:10:55.771 "product_name": "Malloc disk", 00:10:55.771 
"block_size": 512, 00:10:55.771 "num_blocks": 16384, 00:10:55.771 "uuid": "a98faae7-a78c-4e50-9d15-9db3bf0a884b", 00:10:55.771 "assigned_rate_limits": { 00:10:55.771 "rw_ios_per_sec": 0, 00:10:55.771 "rw_mbytes_per_sec": 0, 00:10:55.771 "r_mbytes_per_sec": 0, 00:10:55.771 "w_mbytes_per_sec": 0 00:10:55.771 }, 00:10:55.771 "claimed": false, 00:10:55.771 "zoned": false, 00:10:55.771 "supported_io_types": { 00:10:55.771 "read": true, 00:10:55.771 "write": true, 00:10:55.771 "unmap": true, 00:10:55.771 "flush": true, 00:10:55.771 "reset": true, 00:10:55.771 "nvme_admin": false, 00:10:55.771 "nvme_io": false, 00:10:55.771 "nvme_io_md": false, 00:10:55.771 "write_zeroes": true, 00:10:55.771 "zcopy": true, 00:10:55.771 "get_zone_info": false, 00:10:55.771 "zone_management": false, 00:10:55.771 "zone_append": false, 00:10:55.771 "compare": false, 00:10:55.771 "compare_and_write": false, 00:10:55.771 "abort": true, 00:10:55.771 "seek_hole": false, 00:10:55.771 "seek_data": false, 00:10:55.771 "copy": true, 00:10:55.771 "nvme_iov_md": false 00:10:55.771 }, 00:10:55.771 "memory_domains": [ 00:10:55.771 { 00:10:55.771 "dma_device_id": "system", 00:10:55.771 "dma_device_type": 1 00:10:55.771 }, 00:10:55.771 { 00:10:55.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.771 "dma_device_type": 2 00:10:55.771 } 00:10:55.771 ], 00:10:55.771 "driver_specific": {} 00:10:55.771 } 00:10:55.771 ]' 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.771 [2024-12-05 13:42:27.107253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:55.771 [2024-12-05 13:42:27.107292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.771 [2024-12-05 13:42:27.107311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10971c0 00:10:55.771 [2024-12-05 13:42:27.107323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.771 [2024-12-05 13:42:27.108955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.771 [2024-12-05 13:42:27.108984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:55.771 Passthru0 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.771 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.771 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:55.771 { 00:10:55.771 "name": "Malloc0", 00:10:55.771 "aliases": [ 00:10:55.771 "a98faae7-a78c-4e50-9d15-9db3bf0a884b" 00:10:55.771 ], 00:10:55.771 "product_name": "Malloc disk", 00:10:55.771 "block_size": 512, 00:10:55.771 "num_blocks": 16384, 00:10:55.771 "uuid": "a98faae7-a78c-4e50-9d15-9db3bf0a884b", 00:10:55.771 "assigned_rate_limits": { 00:10:55.771 "rw_ios_per_sec": 0, 00:10:55.771 "rw_mbytes_per_sec": 0, 00:10:55.771 
"r_mbytes_per_sec": 0, 00:10:55.771 "w_mbytes_per_sec": 0 00:10:55.771 }, 00:10:55.771 "claimed": true, 00:10:55.771 "claim_type": "exclusive_write", 00:10:55.771 "zoned": false, 00:10:55.771 "supported_io_types": { 00:10:55.771 "read": true, 00:10:55.771 "write": true, 00:10:55.771 "unmap": true, 00:10:55.771 "flush": true, 00:10:55.771 "reset": true, 00:10:55.771 "nvme_admin": false, 00:10:55.771 "nvme_io": false, 00:10:55.771 "nvme_io_md": false, 00:10:55.771 "write_zeroes": true, 00:10:55.771 "zcopy": true, 00:10:55.771 "get_zone_info": false, 00:10:55.771 "zone_management": false, 00:10:55.771 "zone_append": false, 00:10:55.771 "compare": false, 00:10:55.771 "compare_and_write": false, 00:10:55.771 "abort": true, 00:10:55.771 "seek_hole": false, 00:10:55.771 "seek_data": false, 00:10:55.771 "copy": true, 00:10:55.771 "nvme_iov_md": false 00:10:55.771 }, 00:10:55.771 "memory_domains": [ 00:10:55.771 { 00:10:55.771 "dma_device_id": "system", 00:10:55.771 "dma_device_type": 1 00:10:55.771 }, 00:10:55.771 { 00:10:55.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.771 "dma_device_type": 2 00:10:55.771 } 00:10:55.771 ], 00:10:55.771 "driver_specific": {} 00:10:55.771 }, 00:10:55.771 { 00:10:55.771 "name": "Passthru0", 00:10:55.771 "aliases": [ 00:10:55.771 "8a40658f-2233-5da8-8a19-1c06f59c960e" 00:10:55.771 ], 00:10:55.771 "product_name": "passthru", 00:10:55.771 "block_size": 512, 00:10:55.771 "num_blocks": 16384, 00:10:55.771 "uuid": "8a40658f-2233-5da8-8a19-1c06f59c960e", 00:10:55.772 "assigned_rate_limits": { 00:10:55.772 "rw_ios_per_sec": 0, 00:10:55.772 "rw_mbytes_per_sec": 0, 00:10:55.772 "r_mbytes_per_sec": 0, 00:10:55.772 "w_mbytes_per_sec": 0 00:10:55.772 }, 00:10:55.772 "claimed": false, 00:10:55.772 "zoned": false, 00:10:55.772 "supported_io_types": { 00:10:55.772 "read": true, 00:10:55.772 "write": true, 00:10:55.772 "unmap": true, 00:10:55.772 "flush": true, 00:10:55.772 "reset": true, 00:10:55.772 "nvme_admin": false, 00:10:55.772 "nvme_io": false, 00:10:55.772 "nvme_io_md": false, 00:10:55.772 "write_zeroes": true, 00:10:55.772 "zcopy": true, 00:10:55.772 "get_zone_info": false, 00:10:55.772 "zone_management": false, 00:10:55.772 "zone_append": false, 00:10:55.772 "compare": false, 00:10:55.772 "compare_and_write": false, 00:10:55.772 "abort": true, 00:10:55.772 "seek_hole": false, 00:10:55.772 "seek_data": false, 00:10:55.772 "copy": true, 00:10:55.772 "nvme_iov_md": false 00:10:55.772 }, 00:10:55.772 "memory_domains": [ 00:10:55.772 { 00:10:55.772 "dma_device_id": "system", 00:10:55.772 "dma_device_type": 1 00:10:55.772 }, 00:10:55.772 { 00:10:55.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.772 "dma_device_type": 2 00:10:55.772 } 00:10:55.772 ], 00:10:55.772 "driver_specific": { 00:10:55.772 "passthru": { 00:10:55.772 "name": "Passthru0", 00:10:55.772 "base_bdev_name": "Malloc0" 00:10:55.772 } 00:10:55.772 } 00:10:55.772 } 00:10:55.772 ]' 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:55.772 
13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:55.772 13:42:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:55.772 00:10:55.772 real 0m0.271s 00:10:55.772 user 0m0.163s 00:10:55.772 sys 0m0.042s 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.772 13:42:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:55.772 ************************************ 00:10:55.772 END TEST rpc_integrity 00:10:55.772 ************************************ 00:10:55.772 13:42:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:55.772 13:42:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.772 13:42:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.772 13:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.030 ************************************ 00:10:56.030 START TEST rpc_plugins 00:10:56.030 ************************************ 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:56.030 { 00:10:56.030 "name": "Malloc1", 00:10:56.030 "aliases": [ 00:10:56.030 "49fb9e30-7238-4637-a9f5-e9479d64a9dd" 00:10:56.030 ], 00:10:56.030 "product_name": "Malloc disk", 00:10:56.030 "block_size": 4096, 00:10:56.030 "num_blocks": 256, 00:10:56.030 "uuid": "49fb9e30-7238-4637-a9f5-e9479d64a9dd", 00:10:56.030 "assigned_rate_limits": { 00:10:56.030 "rw_ios_per_sec": 0, 00:10:56.030 "rw_mbytes_per_sec": 0, 00:10:56.030 "r_mbytes_per_sec": 0, 00:10:56.030 "w_mbytes_per_sec": 0 00:10:56.030 }, 00:10:56.030 "claimed": false, 00:10:56.030 "zoned": false, 00:10:56.030 "supported_io_types": { 00:10:56.030 "read": true, 00:10:56.030 "write": true, 00:10:56.030 "unmap": true, 00:10:56.030 "flush": true, 00:10:56.030 "reset": true, 00:10:56.030 "nvme_admin": false, 00:10:56.030 "nvme_io": false, 00:10:56.030 "nvme_io_md": false, 00:10:56.030 "write_zeroes": true, 00:10:56.030 "zcopy": true, 00:10:56.030 
"get_zone_info": false, 00:10:56.030 "zone_management": false, 00:10:56.030 "zone_append": false, 00:10:56.030 "compare": false, 00:10:56.030 "compare_and_write": false, 00:10:56.030 "abort": true, 00:10:56.030 "seek_hole": false, 00:10:56.030 "seek_data": false, 00:10:56.030 "copy": true, 00:10:56.030 "nvme_iov_md": false 00:10:56.030 }, 00:10:56.030 "memory_domains": [ 00:10:56.030 { 00:10:56.030 "dma_device_id": "system", 00:10:56.030 "dma_device_type": 1 00:10:56.030 }, 00:10:56.030 { 00:10:56.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.030 "dma_device_type": 2 00:10:56.030 } 00:10:56.030 ], 00:10:56.030 "driver_specific": {} 00:10:56.030 } 00:10:56.030 ]' 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:56.030 13:42:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:56.030 00:10:56.030 real 0m0.141s 00:10:56.030 user 0m0.093s 00:10:56.030 sys 0m0.013s 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.030 13:42:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:56.030 ************************************ 00:10:56.030 END TEST rpc_plugins 00:10:56.030 ************************************ 00:10:56.030 13:42:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:56.030 13:42:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.030 13:42:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.030 13:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.030 ************************************ 00:10:56.030 START TEST rpc_trace_cmd_test 00:10:56.030 ************************************ 00:10:56.030 13:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:10:56.030 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:56.289 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3836274", 00:10:56.289 "tpoint_group_mask": "0x8", 00:10:56.289 "iscsi_conn": { 00:10:56.289 "mask": "0x2", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "scsi": { 00:10:56.289 "mask": "0x4", 00:10:56.289 "tpoint_mask": 
"0x0" 00:10:56.289 }, 00:10:56.289 "bdev": { 00:10:56.289 "mask": "0x8", 00:10:56.289 "tpoint_mask": "0xffffffffffffffff" 00:10:56.289 }, 00:10:56.289 "nvmf_rdma": { 00:10:56.289 "mask": "0x10", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "nvmf_tcp": { 00:10:56.289 "mask": "0x20", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "ftl": { 00:10:56.289 "mask": "0x40", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "blobfs": { 00:10:56.289 "mask": "0x80", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "dsa": { 00:10:56.289 "mask": "0x200", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "thread": { 00:10:56.289 "mask": "0x400", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "nvme_pcie": { 00:10:56.289 "mask": "0x800", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "iaa": { 00:10:56.289 "mask": "0x1000", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "nvme_tcp": { 00:10:56.289 "mask": "0x2000", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "bdev_nvme": { 00:10:56.289 "mask": "0x4000", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "sock": { 00:10:56.289 "mask": "0x8000", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "blob": { 00:10:56.289 "mask": "0x10000", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "bdev_raid": { 00:10:56.289 "mask": "0x20000", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 }, 00:10:56.289 "scheduler": { 00:10:56.289 "mask": "0x40000", 00:10:56.289 "tpoint_mask": "0x0" 00:10:56.289 } 00:10:56.289 }' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:56.289 13:42:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:56.289 00:10:56.289 real 0m0.231s 00:10:56.289 user 0m0.192s 00:10:56.289 sys 0m0.032s 00:10:56.290 13:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.290 13:42:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.290 ************************************ 00:10:56.290 END TEST rpc_trace_cmd_test 00:10:56.290 ************************************ 00:10:56.548 13:42:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:56.548 13:42:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:56.548 13:42:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:56.548 13:42:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.548 13:42:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.548 13:42:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.548 ************************************ 00:10:56.548 START TEST rpc_daemon_integrity 00:10:56.548 ************************************ 00:10:56.548 
13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.548 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.549 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:56.549 { 00:10:56.549 "name": "Malloc2", 00:10:56.549 "aliases": [ 00:10:56.549 "63a25a33-24a8-4a23-978d-c6a2d47ab935" 00:10:56.549 ], 00:10:56.549 "product_name": "Malloc disk", 00:10:56.549 "block_size": 512, 00:10:56.549 "num_blocks": 16384, 00:10:56.549 "uuid": "63a25a33-24a8-4a23-978d-c6a2d47ab935", 00:10:56.549 "assigned_rate_limits": { 00:10:56.549 "rw_ios_per_sec": 0, 00:10:56.549 "rw_mbytes_per_sec": 0, 00:10:56.549 "r_mbytes_per_sec": 0, 00:10:56.549 "w_mbytes_per_sec": 0 00:10:56.549 }, 00:10:56.549 "claimed": false, 00:10:56.549 "zoned": false, 00:10:56.549 "supported_io_types": { 00:10:56.549 "read": true, 00:10:56.549 "write": true, 00:10:56.549 "unmap": true, 00:10:56.549 "flush": true, 00:10:56.549 "reset": true, 00:10:56.549 "nvme_admin": false, 00:10:56.549 "nvme_io": false, 00:10:56.549 "nvme_io_md": false, 00:10:56.549 "write_zeroes": true, 00:10:56.549 "zcopy": true, 00:10:56.549 "get_zone_info": false, 00:10:56.549 "zone_management": false, 00:10:56.549 "zone_append": false, 00:10:56.549 "compare": false, 00:10:56.549 "compare_and_write": false, 00:10:56.549 "abort": true, 00:10:56.549 "seek_hole": false, 00:10:56.549 "seek_data": false, 00:10:56.549 "copy": true, 00:10:56.549 "nvme_iov_md": false 00:10:56.549 }, 00:10:56.549 "memory_domains": [ 00:10:56.549 { 00:10:56.549 "dma_device_id": "system", 00:10:56.549 "dma_device_type": 1 00:10:56.549 }, 00:10:56.549 { 00:10:56.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.549 "dma_device_type": 2 00:10:56.549 } 00:10:56.549 ], 00:10:56.549 "driver_specific": {} 00:10:56.549 } 00:10:56.549 ]' 00:10:56.549 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:56.549 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:56.549 13:42:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:56.549 13:42:27 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.549 13:42:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.549 [2024-12-05 13:42:28.006090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:56.549 [2024-12-05 13:42:28.006130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.549 [2024-12-05 13:42:28.006152] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf533c0 00:10:56.549 [2024-12-05 13:42:28.006164] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.549 [2024-12-05 13:42:28.007697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.549 [2024-12-05 13:42:28.007727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:56.549 Passthru0 00:10:56.549 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.549 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:56.549 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.549 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.549 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.549 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:56.549 { 00:10:56.549 "name": "Malloc2", 00:10:56.549 "aliases": [ 00:10:56.549 "63a25a33-24a8-4a23-978d-c6a2d47ab935" 00:10:56.549 ], 00:10:56.549 "product_name": "Malloc disk", 00:10:56.549 "block_size": 512, 00:10:56.549 "num_blocks": 16384, 00:10:56.549 "uuid": "63a25a33-24a8-4a23-978d-c6a2d47ab935", 00:10:56.549 "assigned_rate_limits": { 00:10:56.549 "rw_ios_per_sec": 0, 00:10:56.549 "rw_mbytes_per_sec": 0, 00:10:56.549 "r_mbytes_per_sec": 0, 00:10:56.549 "w_mbytes_per_sec": 0 00:10:56.549 }, 00:10:56.549 "claimed": true, 00:10:56.549 "claim_type": "exclusive_write", 00:10:56.549 "zoned": false, 00:10:56.549 "supported_io_types": { 00:10:56.549 "read": true, 00:10:56.549 "write": true, 00:10:56.549 "unmap": true, 00:10:56.549 "flush": true, 00:10:56.549 "reset": true, 00:10:56.549 "nvme_admin": false, 00:10:56.549 "nvme_io": false, 00:10:56.549 "nvme_io_md": false, 00:10:56.549 "write_zeroes": true, 00:10:56.549 "zcopy": true, 00:10:56.549 "get_zone_info": false, 00:10:56.549 "zone_management": false, 00:10:56.549 "zone_append": false, 00:10:56.549 "compare": false, 00:10:56.549 "compare_and_write": false, 00:10:56.549 "abort": true, 00:10:56.549 "seek_hole": false, 00:10:56.549 "seek_data": false, 00:10:56.549 "copy": true, 00:10:56.549 "nvme_iov_md": false 00:10:56.549 }, 00:10:56.549 "memory_domains": [ 00:10:56.549 { 00:10:56.549 "dma_device_id": "system", 00:10:56.549 "dma_device_type": 1 00:10:56.549 }, 00:10:56.549 { 00:10:56.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.549 "dma_device_type": 2 00:10:56.549 } 00:10:56.549 ], 00:10:56.549 "driver_specific": {} 00:10:56.549 }, 00:10:56.549 { 00:10:56.549 "name": "Passthru0", 00:10:56.549 "aliases": [ 00:10:56.549 "da1bb6dd-238a-50b2-8bb4-ea897c086ab4" 00:10:56.549 ], 00:10:56.549 "product_name": "passthru", 00:10:56.549 "block_size": 512, 00:10:56.549 "num_blocks": 16384, 00:10:56.549 "uuid": "da1bb6dd-238a-50b2-8bb4-ea897c086ab4", 00:10:56.549 "assigned_rate_limits": { 00:10:56.549 "rw_ios_per_sec": 0, 00:10:56.549 "rw_mbytes_per_sec": 0, 00:10:56.549 
"r_mbytes_per_sec": 0, 00:10:56.549 "w_mbytes_per_sec": 0 00:10:56.549 }, 00:10:56.549 "claimed": false, 00:10:56.549 "zoned": false, 00:10:56.549 "supported_io_types": { 00:10:56.549 "read": true, 00:10:56.549 "write": true, 00:10:56.549 "unmap": true, 00:10:56.549 "flush": true, 00:10:56.549 "reset": true, 00:10:56.549 "nvme_admin": false, 00:10:56.549 "nvme_io": false, 00:10:56.549 "nvme_io_md": false, 00:10:56.549 "write_zeroes": true, 00:10:56.549 "zcopy": true, 00:10:56.549 "get_zone_info": false, 00:10:56.549 "zone_management": false, 00:10:56.549 "zone_append": false, 00:10:56.549 "compare": false, 00:10:56.549 "compare_and_write": false, 00:10:56.549 "abort": true, 00:10:56.549 "seek_hole": false, 00:10:56.549 "seek_data": false, 00:10:56.549 "copy": true, 00:10:56.549 "nvme_iov_md": false 00:10:56.549 }, 00:10:56.549 "memory_domains": [ 00:10:56.549 { 00:10:56.549 "dma_device_id": "system", 00:10:56.549 "dma_device_type": 1 00:10:56.549 }, 00:10:56.549 { 00:10:56.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.549 "dma_device_type": 2 00:10:56.549 } 00:10:56.549 ], 00:10:56.549 "driver_specific": { 00:10:56.549 "passthru": { 00:10:56.549 "name": "Passthru0", 00:10:56.549 "base_bdev_name": "Malloc2" 00:10:56.549 } 00:10:56.549 } 00:10:56.549 } 00:10:56.549 ]' 00:10:56.549 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:56.808 00:10:56.808 real 0m0.293s 00:10:56.808 user 0m0.182s 00:10:56.808 sys 0m0.038s 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.808 13:42:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:56.808 ************************************ 00:10:56.808 END TEST rpc_daemon_integrity 00:10:56.808 ************************************ 00:10:56.808 13:42:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:56.808 13:42:28 rpc -- rpc/rpc.sh@84 -- # killprocess 3836274 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 3836274 ']' 00:10:56.808 13:42:28 rpc 
-- common/autotest_common.sh@958 -- # kill -0 3836274 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@959 -- # uname 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3836274 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3836274' 00:10:56.808 killing process with pid 3836274 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@973 -- # kill 3836274 00:10:56.808 13:42:28 rpc -- common/autotest_common.sh@978 -- # wait 3836274 00:10:57.376 00:10:57.376 real 0m2.634s 00:10:57.376 user 0m3.102s 00:10:57.376 sys 0m0.944s 00:10:57.376 13:42:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.376 13:42:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 ************************************ 00:10:57.376 END TEST rpc 00:10:57.376 ************************************ 00:10:57.376 13:42:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:57.376 13:42:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:57.376 13:42:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.376 13:42:28 -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 ************************************ 00:10:57.376 START TEST skip_rpc 00:10:57.376 ************************************ 00:10:57.376 13:42:28 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:10:57.635 * Looking for test storage... 00:10:57.635 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:10:57.635 13:42:28 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.635 13:42:28 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.635 13:42:28 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.635 13:42:29 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.635 13:42:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.636 13:42:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.636 --rc genhtml_branch_coverage=1 00:10:57.636 --rc genhtml_function_coverage=1 00:10:57.636 --rc genhtml_legend=1 00:10:57.636 --rc geninfo_all_blocks=1 00:10:57.636 --rc geninfo_unexecuted_blocks=1 00:10:57.636 00:10:57.636 ' 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.636 --rc genhtml_branch_coverage=1 00:10:57.636 --rc genhtml_function_coverage=1 00:10:57.636 --rc genhtml_legend=1 00:10:57.636 --rc geninfo_all_blocks=1 00:10:57.636 --rc geninfo_unexecuted_blocks=1 00:10:57.636 00:10:57.636 ' 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.636 --rc genhtml_branch_coverage=1 00:10:57.636 --rc genhtml_function_coverage=1 00:10:57.636 --rc genhtml_legend=1 00:10:57.636 --rc geninfo_all_blocks=1 00:10:57.636 --rc geninfo_unexecuted_blocks=1 00:10:57.636 00:10:57.636 ' 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.636 --rc genhtml_branch_coverage=1 00:10:57.636 --rc genhtml_function_coverage=1 00:10:57.636 --rc genhtml_legend=1 00:10:57.636 --rc geninfo_all_blocks=1 00:10:57.636 --rc geninfo_unexecuted_blocks=1 00:10:57.636 00:10:57.636 ' 00:10:57.636 13:42:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:10:57.636 13:42:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/log.txt 00:10:57.636 13:42:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.636 13:42:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.636 ************************************ 00:10:57.636 START TEST skip_rpc 00:10:57.636 ************************************ 00:10:57.636 13:42:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:57.636 13:42:29 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3836815 00:10:57.636 13:42:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:57.636 13:42:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:57.636 13:42:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:57.636 [2024-12-05 13:42:29.140446] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:10:57.636 [2024-12-05 13:42:29.140511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836815 ] 00:10:57.895 [2024-12-05 13:42:29.260538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.895 [2024-12-05 13:42:29.315229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.154 [2024-12-05 13:42:29.536765] 'OCF_Core' volume operations registered 00:10:58.154 [2024-12-05 13:42:29.536806] 'OCF_Cache' volume operations registered 00:10:58.154 [2024-12-05 13:42:29.541212] 'OCF Composite' volume operations registered 00:10:58.154 [2024-12-05 13:42:29.545672] 'SPDK_block_device' volume operations registered 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3836815 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3836815 ']' 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3836815 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3836815 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3836815' 00:11:03.428 killing process with pid 3836815 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3836815 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3836815 00:11:03.428 00:11:03.428 real 0m5.613s 00:11:03.428 user 0m5.173s 00:11:03.428 sys 0m0.477s 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.428 13:42:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.428 ************************************ 00:11:03.429 END TEST skip_rpc 00:11:03.429 ************************************ 00:11:03.429 13:42:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:03.429 13:42:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.429 13:42:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.429 13:42:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.429 ************************************ 00:11:03.429 START TEST skip_rpc_with_json 00:11:03.429 ************************************ 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3837702 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3837702 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3837702 ']' 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.429 13:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:03.429 [2024-12-05 13:42:34.830122] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:03.429 [2024-12-05 13:42:34.830176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837702 ] 00:11:03.429 [2024-12-05 13:42:34.937194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.687 [2024-12-05 13:42:34.992201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.687 [2024-12-05 13:42:35.200736] 'OCF_Core' volume operations registered 00:11:03.687 [2024-12-05 13:42:35.200777] 'OCF_Cache' volume operations registered 00:11:03.687 [2024-12-05 13:42:35.205212] 'OCF Composite' volume operations registered 00:11:03.687 [2024-12-05 13:42:35.209698] 'SPDK_block_device' volume operations registered 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:03.945 [2024-12-05 13:42:35.381016] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:03.945 request: 00:11:03.945 { 00:11:03.945 "trtype": "tcp", 00:11:03.945 "method": "nvmf_get_transports", 00:11:03.945 "req_id": 1 00:11:03.945 } 00:11:03.945 Got JSON-RPC error response 00:11:03.945 response: 00:11:03.945 { 00:11:03.945 "code": -19, 00:11:03.945 "message": "No such device" 00:11:03.945 } 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:03.945 [2024-12-05 13:42:35.393158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.945 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:11:04.205 { 00:11:04.205 "subsystems": [ 00:11:04.205 { 00:11:04.205 "subsystem": "fsdev", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "fsdev_set_opts", 00:11:04.205 "params": { 00:11:04.205 "fsdev_io_pool_size": 65535, 00:11:04.205 "fsdev_io_cache_size": 256 00:11:04.205 } 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "keyring", 00:11:04.205 "config": [] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "iobuf", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "iobuf_set_options", 00:11:04.205 
"params": { 00:11:04.205 "small_pool_count": 8192, 00:11:04.205 "large_pool_count": 1024, 00:11:04.205 "small_bufsize": 8192, 00:11:04.205 "large_bufsize": 135168, 00:11:04.205 "enable_numa": false 00:11:04.205 } 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "sock", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "sock_set_default_impl", 00:11:04.205 "params": { 00:11:04.205 "impl_name": "posix" 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "sock_impl_set_options", 00:11:04.205 "params": { 00:11:04.205 "impl_name": "ssl", 00:11:04.205 "recv_buf_size": 4096, 00:11:04.205 "send_buf_size": 4096, 00:11:04.205 "enable_recv_pipe": true, 00:11:04.205 "enable_quickack": false, 00:11:04.205 "enable_placement_id": 0, 00:11:04.205 "enable_zerocopy_send_server": true, 00:11:04.205 "enable_zerocopy_send_client": false, 00:11:04.205 "zerocopy_threshold": 0, 00:11:04.205 "tls_version": 0, 00:11:04.205 "enable_ktls": false 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "sock_impl_set_options", 00:11:04.205 "params": { 00:11:04.205 "impl_name": "posix", 00:11:04.205 "recv_buf_size": 2097152, 00:11:04.205 "send_buf_size": 2097152, 00:11:04.205 "enable_recv_pipe": true, 00:11:04.205 "enable_quickack": false, 00:11:04.205 "enable_placement_id": 0, 00:11:04.205 "enable_zerocopy_send_server": true, 00:11:04.205 "enable_zerocopy_send_client": false, 00:11:04.205 "zerocopy_threshold": 0, 00:11:04.205 "tls_version": 0, 00:11:04.205 "enable_ktls": false 00:11:04.205 } 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "vmd", 00:11:04.205 "config": [] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "accel", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "accel_set_options", 00:11:04.205 "params": { 00:11:04.205 "small_cache_size": 128, 00:11:04.205 "large_cache_size": 16, 00:11:04.205 "task_count": 2048, 00:11:04.205 "sequence_count": 2048, 00:11:04.205 "buf_count": 2048 00:11:04.205 } 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "bdev", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "bdev_set_options", 00:11:04.205 "params": { 00:11:04.205 "bdev_io_pool_size": 65535, 00:11:04.205 "bdev_io_cache_size": 256, 00:11:04.205 "bdev_auto_examine": true, 00:11:04.205 "iobuf_small_cache_size": 128, 00:11:04.205 "iobuf_large_cache_size": 16 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "bdev_raid_set_options", 00:11:04.205 "params": { 00:11:04.205 "process_window_size_kb": 1024, 00:11:04.205 "process_max_bandwidth_mb_sec": 0 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "bdev_iscsi_set_options", 00:11:04.205 "params": { 00:11:04.205 "timeout_sec": 30 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "bdev_nvme_set_options", 00:11:04.205 "params": { 00:11:04.205 "action_on_timeout": "none", 00:11:04.205 "timeout_us": 0, 00:11:04.205 "timeout_admin_us": 0, 00:11:04.205 "keep_alive_timeout_ms": 10000, 00:11:04.205 "arbitration_burst": 0, 00:11:04.205 "low_priority_weight": 0, 00:11:04.205 "medium_priority_weight": 0, 00:11:04.205 "high_priority_weight": 0, 00:11:04.205 "nvme_adminq_poll_period_us": 10000, 00:11:04.205 "nvme_ioq_poll_period_us": 0, 00:11:04.205 "io_queue_requests": 0, 00:11:04.205 "delay_cmd_submit": true, 00:11:04.205 "transport_retry_count": 4, 00:11:04.205 "bdev_retry_count": 3, 00:11:04.205 "transport_ack_timeout": 0, 
00:11:04.205 "ctrlr_loss_timeout_sec": 0, 00:11:04.205 "reconnect_delay_sec": 0, 00:11:04.205 "fast_io_fail_timeout_sec": 0, 00:11:04.205 "disable_auto_failback": false, 00:11:04.205 "generate_uuids": false, 00:11:04.205 "transport_tos": 0, 00:11:04.205 "nvme_error_stat": false, 00:11:04.205 "rdma_srq_size": 0, 00:11:04.205 "io_path_stat": false, 00:11:04.205 "allow_accel_sequence": false, 00:11:04.205 "rdma_max_cq_size": 0, 00:11:04.205 "rdma_cm_event_timeout_ms": 0, 00:11:04.205 "dhchap_digests": [ 00:11:04.205 "sha256", 00:11:04.205 "sha384", 00:11:04.205 "sha512" 00:11:04.205 ], 00:11:04.205 "dhchap_dhgroups": [ 00:11:04.205 "null", 00:11:04.205 "ffdhe2048", 00:11:04.205 "ffdhe3072", 00:11:04.205 "ffdhe4096", 00:11:04.205 "ffdhe6144", 00:11:04.205 "ffdhe8192" 00:11:04.205 ] 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "bdev_nvme_set_hotplug", 00:11:04.205 "params": { 00:11:04.205 "period_us": 100000, 00:11:04.205 "enable": false 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "bdev_wait_for_examine" 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "scsi", 00:11:04.205 "config": null 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "scheduler", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "framework_set_scheduler", 00:11:04.205 "params": { 00:11:04.205 "name": "static" 00:11:04.205 } 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "vhost_scsi", 00:11:04.205 "config": [] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "vhost_blk", 00:11:04.205 "config": [] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "ublk", 00:11:04.205 "config": [] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "nbd", 00:11:04.205 "config": [] 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "subsystem": "nvmf", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "nvmf_set_config", 00:11:04.205 "params": { 00:11:04.205 "discovery_filter": "match_any", 00:11:04.205 "admin_cmd_passthru": { 00:11:04.205 "identify_ctrlr": false 00:11:04.205 }, 00:11:04.205 "dhchap_digests": [ 00:11:04.205 "sha256", 00:11:04.205 "sha384", 00:11:04.205 "sha512" 00:11:04.205 ], 00:11:04.205 "dhchap_dhgroups": [ 00:11:04.205 "null", 00:11:04.205 "ffdhe2048", 00:11:04.205 "ffdhe3072", 00:11:04.205 "ffdhe4096", 00:11:04.205 "ffdhe6144", 00:11:04.205 "ffdhe8192" 00:11:04.205 ] 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "nvmf_set_max_subsystems", 00:11:04.205 "params": { 00:11:04.205 "max_subsystems": 1024 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "nvmf_set_crdt", 00:11:04.205 "params": { 00:11:04.205 "crdt1": 0, 00:11:04.205 "crdt2": 0, 00:11:04.205 "crdt3": 0 00:11:04.205 } 00:11:04.205 }, 00:11:04.205 { 00:11:04.205 "method": "nvmf_create_transport", 00:11:04.205 "params": { 00:11:04.205 "trtype": "TCP", 00:11:04.205 "max_queue_depth": 128, 00:11:04.205 "max_io_qpairs_per_ctrlr": 127, 00:11:04.205 "in_capsule_data_size": 4096, 00:11:04.205 "max_io_size": 131072, 00:11:04.205 "io_unit_size": 131072, 00:11:04.205 "max_aq_depth": 128, 00:11:04.205 "num_shared_buffers": 511, 00:11:04.205 "buf_cache_size": 4294967295, 00:11:04.205 "dif_insert_or_strip": false, 00:11:04.205 "zcopy": false, 00:11:04.205 "c2h_success": true, 00:11:04.205 "sock_priority": 0, 00:11:04.205 "abort_timeout_sec": 1, 00:11:04.205 "ack_timeout": 0, 00:11:04.205 "data_wr_pool_size": 0 00:11:04.205 } 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 }, 
00:11:04.205 { 00:11:04.205 "subsystem": "iscsi", 00:11:04.205 "config": [ 00:11:04.205 { 00:11:04.205 "method": "iscsi_set_options", 00:11:04.205 "params": { 00:11:04.205 "node_base": "iqn.2016-06.io.spdk", 00:11:04.205 "max_sessions": 128, 00:11:04.205 "max_connections_per_session": 2, 00:11:04.205 "max_queue_depth": 64, 00:11:04.205 "default_time2wait": 2, 00:11:04.205 "default_time2retain": 20, 00:11:04.205 "first_burst_length": 8192, 00:11:04.205 "immediate_data": true, 00:11:04.205 "allow_duplicated_isid": false, 00:11:04.205 "error_recovery_level": 0, 00:11:04.205 "nop_timeout": 60, 00:11:04.205 "nop_in_interval": 30, 00:11:04.205 "disable_chap": false, 00:11:04.205 "require_chap": false, 00:11:04.205 "mutual_chap": false, 00:11:04.205 "chap_group": 0, 00:11:04.205 "max_large_datain_per_connection": 64, 00:11:04.205 "max_r2t_per_connection": 4, 00:11:04.205 "pdu_pool_size": 36864, 00:11:04.205 "immediate_data_pool_size": 16384, 00:11:04.205 "data_out_pool_size": 2048 00:11:04.205 } 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 } 00:11:04.205 ] 00:11:04.205 } 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3837702 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3837702 ']' 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3837702 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3837702 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3837702' 00:11:04.205 killing process with pid 3837702 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3837702 00:11:04.205 13:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3837702 00:11:04.773 13:42:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3837882 00:11:04.773 13:42:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:04.773 13:42:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3837882 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3837882 ']' 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3837882 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3837882 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3837882' 00:11:10.042 killing process with pid 3837882 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3837882 00:11:10.042 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3837882 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/log.txt 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/log.txt 00:11:10.300 00:11:10.300 real 0m6.955s 00:11:10.300 user 0m6.367s 00:11:10.300 sys 0m0.935s 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:10.300 ************************************ 00:11:10.300 END TEST skip_rpc_with_json 00:11:10.300 ************************************ 00:11:10.300 13:42:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:10.300 13:42:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.300 13:42:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.300 13:42:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.300 ************************************ 00:11:10.300 START TEST skip_rpc_with_delay 00:11:10.300 ************************************ 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:11:10.300 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:10.559 [2024-12-05 13:42:41.861517] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:11:10.559 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:10.559 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:10.559 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:10.559 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:10.559 00:11:10.559 real 0m0.082s 00:11:10.559 user 0m0.045s 00:11:10.559 sys 0m0.036s 00:11:10.559 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.559 13:42:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:10.559 ************************************ 00:11:10.559 END TEST skip_rpc_with_delay 00:11:10.559 ************************************ 00:11:10.559 13:42:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:10.559 13:42:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:10.559 13:42:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:10.559 13:42:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.559 13:42:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.559 13:42:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.559 ************************************ 00:11:10.559 START TEST exit_on_failed_rpc_init 00:11:10.559 ************************************ 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3838647 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3838647 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3838647 ']' 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.559 13:42:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:10.559 [2024-12-05 13:42:42.019585] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:10.559 [2024-12-05 13:42:42.019658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838647 ] 00:11:10.817 [2024-12-05 13:42:42.140121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.817 [2024-12-05 13:42:42.196951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.074 [2024-12-05 13:42:42.414943] 'OCF_Core' volume operations registered 00:11:11.074 [2024-12-05 13:42:42.414981] 'OCF_Cache' volume operations registered 00:11:11.074 [2024-12-05 13:42:42.419388] 'OCF Composite' volume operations registered 00:11:11.074 [2024-12-05 13:42:42.423852] 'SPDK_block_device' volume operations registered 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:11.074 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:11:11.332 [2024-12-05 13:42:42.636225] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:11.332 [2024-12-05 13:42:42.636279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838715 ] 00:11:11.332 [2024-12-05 13:42:42.713180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.332 [2024-12-05 13:42:42.758578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.332 [2024-12-05 13:42:42.758654] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:11.332 [2024-12-05 13:42:42.758666] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:11.332 [2024-12-05 13:42:42.758675] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3838647 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3838647 ']' 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3838647 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.332 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3838647 00:11:11.626 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.626 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.627 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3838647' 00:11:11.627 killing process with pid 3838647 00:11:11.627 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3838647 00:11:11.627 13:42:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3838647 00:11:11.916 00:11:11.916 real 0m1.443s 00:11:11.916 user 0m1.411s 00:11:11.916 sys 0m0.557s 00:11:11.916 13:42:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.916 13:42:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:11.916 ************************************ 00:11:11.916 END TEST exit_on_failed_rpc_init 00:11:11.916 ************************************ 00:11:12.233 13:42:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:11:12.233 00:11:12.233 real 0m14.579s 00:11:12.233 user 
0m13.192s 00:11:12.233 sys 0m2.330s 00:11:12.233 13:42:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.233 13:42:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.233 ************************************ 00:11:12.233 END TEST skip_rpc 00:11:12.233 ************************************ 00:11:12.233 13:42:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:11:12.233 13:42:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.233 13:42:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.233 13:42:43 -- common/autotest_common.sh@10 -- # set +x 00:11:12.233 ************************************ 00:11:12.233 START TEST rpc_client 00:11:12.233 ************************************ 00:11:12.233 13:42:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:11:12.233 * Looking for test storage... 00:11:12.233 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client 00:11:12.233 13:42:43 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.233 13:42:43 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.233 13:42:43 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.233 13:42:43 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.233 13:42:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.510 13:42:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:12.510 13:42:43 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.510 13:42:43 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:11:12.510 OK 00:11:12.510 13:42:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:12.510 00:11:12.510 real 0m0.216s 00:11:12.510 user 0m0.127s 00:11:12.510 sys 0m0.104s 00:11:12.510 13:42:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.510 13:42:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:12.510 ************************************ 00:11:12.510 END TEST rpc_client 00:11:12.510 ************************************ 00:11:12.510 13:42:43 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh 00:11:12.510 
13:42:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.510 13:42:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.510 13:42:43 -- common/autotest_common.sh@10 -- # set +x 00:11:12.510 ************************************ 00:11:12.510 START TEST json_config 00:11:12.510 ************************************ 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.510 13:42:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.510 13:42:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.510 13:42:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.510 13:42:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.510 13:42:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.510 13:42:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.510 13:42:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.510 13:42:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:12.510 13:42:43 json_config -- scripts/common.sh@345 -- # : 1 00:11:12.510 13:42:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.510 13:42:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.510 13:42:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:12.510 13:42:43 json_config -- scripts/common.sh@353 -- # local d=1 00:11:12.510 13:42:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.510 13:42:43 json_config -- scripts/common.sh@355 -- # echo 1 00:11:12.510 13:42:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.510 13:42:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@353 -- # local d=2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.510 13:42:43 json_config -- scripts/common.sh@355 -- # echo 2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.510 13:42:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.510 13:42:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.510 13:42:43 json_config -- scripts/common.sh@368 -- # return 0 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.510 --rc genhtml_branch_coverage=1 00:11:12.510 --rc genhtml_function_coverage=1 00:11:12.510 --rc genhtml_legend=1 00:11:12.510 --rc geninfo_all_blocks=1 00:11:12.510 --rc geninfo_unexecuted_blocks=1 00:11:12.510 00:11:12.510 ' 00:11:12.510 13:42:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.510 13:42:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:11:12.510 13:42:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:804400cf-1c42-e711-906e-0012795d9712 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=804400cf-1c42-e711-906e-0012795d9712 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.510 13:42:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:11:12.510 13:42:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.510 13:42:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.510 13:42:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.510 13:42:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.511 13:42:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.511 13:42:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.511 13:42:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.511 13:42:44 json_config -- paths/export.sh@5 -- # export PATH 00:11:12.511 13:42:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@51 -- # : 0 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.511 
13:42:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.511 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.511 13:42:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.511 13:42:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/common.sh 00:11:12.770 13:42:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:12.770 13:42:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:12.770 13:42:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:12.770 13:42:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:12.770 13:42:44 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:11:12.770 WARNING: No tests are enabled so not running JSON configuration tests 00:11:12.770 13:42:44 json_config -- json_config/json_config.sh@28 -- # exit 0 00:11:12.770 00:11:12.770 real 0m0.211s 00:11:12.770 user 0m0.135s 00:11:12.770 sys 0m0.084s 00:11:12.770 13:42:44 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.770 13:42:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:12.770 ************************************ 00:11:12.770 END TEST json_config 00:11:12.770 ************************************ 00:11:12.770 13:42:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:11:12.770 13:42:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:12.770 13:42:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.770 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:11:12.770 ************************************ 00:11:12.770 START TEST json_config_extra_key 00:11:12.770 ************************************ 00:11:12.770 13:42:44 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:11:12.770 13:42:44 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.770 13:42:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.770 13:42:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.770 13:42:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.770 13:42:44 
json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.770 13:42:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:13.029 13:42:44 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.029 13:42:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.029 --rc genhtml_branch_coverage=1 00:11:13.029 --rc genhtml_function_coverage=1 00:11:13.029 --rc genhtml_legend=1 00:11:13.029 --rc geninfo_all_blocks=1 00:11:13.029 --rc geninfo_unexecuted_blocks=1 00:11:13.029 00:11:13.029 ' 00:11:13.029 13:42:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.029 --rc genhtml_branch_coverage=1 00:11:13.029 --rc genhtml_function_coverage=1 00:11:13.029 --rc genhtml_legend=1 00:11:13.029 --rc geninfo_all_blocks=1 00:11:13.029 --rc geninfo_unexecuted_blocks=1 00:11:13.029 00:11:13.029 ' 00:11:13.029 13:42:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.029 --rc genhtml_branch_coverage=1 00:11:13.029 --rc genhtml_function_coverage=1 00:11:13.029 --rc genhtml_legend=1 00:11:13.029 --rc geninfo_all_blocks=1 00:11:13.029 --rc geninfo_unexecuted_blocks=1 00:11:13.029 00:11:13.029 ' 00:11:13.029 13:42:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:13.029 --rc genhtml_branch_coverage=1 00:11:13.029 --rc genhtml_function_coverage=1 00:11:13.029 --rc genhtml_legend=1 00:11:13.029 --rc geninfo_all_blocks=1 00:11:13.029 --rc geninfo_unexecuted_blocks=1 00:11:13.029 00:11:13.029 ' 00:11:13.029 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:804400cf-1c42-e711-906e-0012795d9712 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=804400cf-1c42-e711-906e-0012795d9712 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.029 13:42:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.029 13:42:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.029 13:42:44 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.029 13:42:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.029 13:42:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:13.029 13:42:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.029 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.029 13:42:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.029 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/common.sh 00:11:13.029 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json') 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:13.030 INFO: launching applications... 00:11:13.030 13:42:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3839172 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:13.030 Waiting for target to run... 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3839172 /var/tmp/spdk_tgt.sock 00:11:13.030 13:42:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3839172 ']' 00:11:13.030 13:42:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json 00:11:13.030 13:42:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:13.030 13:42:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.030 13:42:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:13.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:13.030 13:42:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.030 13:42:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:13.030 [2024-12-05 13:42:44.405929] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:13.030 [2024-12-05 13:42:44.406002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839172 ] 00:11:13.598 [2024-12-05 13:42:44.990573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.598 [2024-12-05 13:42:45.050462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.856 [2024-12-05 13:42:45.131066] 'OCF_Core' volume operations registered 00:11:13.856 [2024-12-05 13:42:45.131093] 'OCF_Cache' volume operations registered 00:11:13.856 [2024-12-05 13:42:45.134125] 'OCF Composite' volume operations registered 00:11:13.856 [2024-12-05 13:42:45.137192] 'SPDK_block_device' volume operations registered 00:11:13.856 13:42:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.856 13:42:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:13.856 00:11:13.856 13:42:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:11:13.856 INFO: shutting down applications... 00:11:13.856 13:42:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3839172 ]] 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3839172 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3839172 00:11:13.856 13:42:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:14.423 13:42:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:14.423 13:42:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:14.423 13:42:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3839172 00:11:14.423 13:42:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:14.992 13:42:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:14.992 13:42:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:14.992 13:42:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3839172 00:11:14.992 13:42:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:14.992 13:42:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:14.992 13:42:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:14.992 13:42:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:14.992 SPDK target shutdown done 00:11:14.992 13:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:14.992 Success 00:11:14.992 00:11:14.992 real 0m2.113s 00:11:14.992 user 0m1.205s 00:11:14.992 sys 0m0.775s 00:11:14.992 13:42:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:14.992 13:42:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:14.992 ************************************ 00:11:14.992 END TEST json_config_extra_key 00:11:14.992 ************************************ 00:11:14.992 13:42:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:14.992 13:42:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.992 13:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.992 13:42:46 -- common/autotest_common.sh@10 -- # set +x 00:11:14.992 ************************************ 00:11:14.992 START TEST alias_rpc 00:11:14.992 ************************************ 00:11:14.992 13:42:46 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:14.992 * Looking for test storage... 00:11:14.992 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc 00:11:14.992 13:42:46 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.992 13:42:46 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.992 13:42:46 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.992 13:42:46 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:14.992 13:42:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.252 13:42:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.252 --rc genhtml_branch_coverage=1 00:11:15.252 --rc genhtml_function_coverage=1 00:11:15.252 --rc genhtml_legend=1 00:11:15.252 --rc geninfo_all_blocks=1 00:11:15.252 --rc geninfo_unexecuted_blocks=1 00:11:15.252 00:11:15.252 ' 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.252 --rc genhtml_branch_coverage=1 00:11:15.252 --rc genhtml_function_coverage=1 00:11:15.252 --rc genhtml_legend=1 00:11:15.252 --rc geninfo_all_blocks=1 00:11:15.252 --rc geninfo_unexecuted_blocks=1 00:11:15.252 00:11:15.252 ' 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.252 --rc genhtml_branch_coverage=1 00:11:15.252 --rc genhtml_function_coverage=1 00:11:15.252 --rc genhtml_legend=1 00:11:15.252 --rc geninfo_all_blocks=1 00:11:15.252 --rc geninfo_unexecuted_blocks=1 00:11:15.252 00:11:15.252 ' 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.252 --rc genhtml_branch_coverage=1 00:11:15.252 --rc genhtml_function_coverage=1 00:11:15.252 --rc genhtml_legend=1 00:11:15.252 --rc geninfo_all_blocks=1 00:11:15.252 --rc geninfo_unexecuted_blocks=1 00:11:15.252 00:11:15.252 ' 00:11:15.252 13:42:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:15.252 13:42:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3839429 00:11:15.252 13:42:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3839429 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3839429 ']' 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.252 13:42:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.252 13:42:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:15.252 [2024-12-05 13:42:46.582740] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:15.252 [2024-12-05 13:42:46.582818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839429 ] 00:11:15.252 [2024-12-05 13:42:46.705067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.252 [2024-12-05 13:42:46.760464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.532 [2024-12-05 13:42:46.985517] 'OCF_Core' volume operations registered 00:11:15.532 [2024-12-05 13:42:46.985555] 'OCF_Cache' volume operations registered 00:11:15.532 [2024-12-05 13:42:46.989970] 'OCF Composite' volume operations registered 00:11:15.532 [2024-12-05 13:42:46.994409] 'SPDK_block_device' volume operations registered 00:11:15.791 13:42:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.791 13:42:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:15.791 13:42:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py load_config -i 00:11:16.050 13:42:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3839429 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3839429 ']' 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3839429 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3839429 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3839429' 00:11:16.050 killing process with pid 3839429 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@973 -- # kill 3839429 00:11:16.050 13:42:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 3839429 00:11:16.618 00:11:16.618 real 0m1.732s 00:11:16.618 user 0m1.666s 00:11:16.618 sys 0m0.638s 00:11:16.618 13:42:48 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.618 13:42:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.618 ************************************ 00:11:16.618 END TEST alias_rpc 00:11:16.618 ************************************ 00:11:16.618 13:42:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:16.618 13:42:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh 00:11:16.618 13:42:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.618 13:42:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.618 13:42:48 -- common/autotest_common.sh@10 -- # set +x 00:11:16.877 ************************************ 00:11:16.877 START TEST spdkcli_tcp 00:11:16.877 ************************************ 
00:11:16.877 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh 00:11:16.877 * Looking for test storage... 00:11:16.877 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli 00:11:16.877 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.877 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.877 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.877 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.877 13:42:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:16.878 13:42:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.878 13:42:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.878 13:42:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.878 13:42:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.878 --rc genhtml_branch_coverage=1 00:11:16.878 --rc genhtml_function_coverage=1 00:11:16.878 --rc genhtml_legend=1 00:11:16.878 --rc geninfo_all_blocks=1 00:11:16.878 --rc geninfo_unexecuted_blocks=1 00:11:16.878 00:11:16.878 ' 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.878 --rc genhtml_branch_coverage=1 00:11:16.878 --rc genhtml_function_coverage=1 00:11:16.878 --rc genhtml_legend=1 00:11:16.878 --rc geninfo_all_blocks=1 00:11:16.878 --rc geninfo_unexecuted_blocks=1 00:11:16.878 00:11:16.878 ' 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.878 --rc genhtml_branch_coverage=1 00:11:16.878 --rc genhtml_function_coverage=1 00:11:16.878 --rc genhtml_legend=1 00:11:16.878 --rc geninfo_all_blocks=1 00:11:16.878 --rc geninfo_unexecuted_blocks=1 00:11:16.878 00:11:16.878 ' 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.878 --rc genhtml_branch_coverage=1 00:11:16.878 --rc genhtml_function_coverage=1 00:11:16.878 --rc genhtml_legend=1 00:11:16.878 --rc geninfo_all_blocks=1 00:11:16.878 --rc geninfo_unexecuted_blocks=1 00:11:16.878 00:11:16.878 ' 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/common.sh 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/clear_config.py 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 
00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3839817 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3839817 00:11:16.878 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3839817 ']' 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.878 13:42:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:16.878 [2024-12-05 13:42:48.352050] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:16.878 [2024-12-05 13:42:48.352114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839817 ] 00:11:17.137 [2024-12-05 13:42:48.458827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.137 [2024-12-05 13:42:48.515811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.137 [2024-12-05 13:42:48.515818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.396 [2024-12-05 13:42:48.729029] 'OCF_Core' volume operations registered 00:11:17.396 [2024-12-05 13:42:48.729072] 'OCF_Cache' volume operations registered 00:11:17.396 [2024-12-05 13:42:48.733463] 'OCF Composite' volume operations registered 00:11:17.396 [2024-12-05 13:42:48.737908] 'SPDK_block_device' volume operations registered 00:11:17.396 13:42:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.396 13:42:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:11:17.396 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3839832 00:11:17.396 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:17.396 13:42:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:17.656 [ 00:11:17.656 "bdev_malloc_delete", 00:11:17.656 "bdev_malloc_create", 00:11:17.656 "bdev_null_resize", 00:11:17.656 "bdev_null_delete", 00:11:17.656 "bdev_null_create", 00:11:17.656 "bdev_nvme_cuse_unregister", 00:11:17.656 "bdev_nvme_cuse_register", 00:11:17.656 "bdev_opal_new_user", 00:11:17.656 "bdev_opal_set_lock_state", 00:11:17.656 "bdev_opal_delete", 00:11:17.656 "bdev_opal_get_info", 00:11:17.656 "bdev_opal_create", 00:11:17.656 "bdev_nvme_opal_revert", 00:11:17.656 "bdev_nvme_opal_init", 00:11:17.656 "bdev_nvme_send_cmd", 00:11:17.656 "bdev_nvme_set_keys", 00:11:17.656 "bdev_nvme_get_path_iostat", 00:11:17.656 "bdev_nvme_get_mdns_discovery_info", 00:11:17.656 "bdev_nvme_stop_mdns_discovery", 00:11:17.656 
"bdev_nvme_start_mdns_discovery", 00:11:17.656 "bdev_nvme_set_multipath_policy", 00:11:17.656 "bdev_nvme_set_preferred_path", 00:11:17.656 "bdev_nvme_get_io_paths", 00:11:17.656 "bdev_nvme_remove_error_injection", 00:11:17.656 "bdev_nvme_add_error_injection", 00:11:17.656 "bdev_nvme_get_discovery_info", 00:11:17.656 "bdev_nvme_stop_discovery", 00:11:17.656 "bdev_nvme_start_discovery", 00:11:17.656 "bdev_nvme_get_controller_health_info", 00:11:17.656 "bdev_nvme_disable_controller", 00:11:17.656 "bdev_nvme_enable_controller", 00:11:17.656 "bdev_nvme_reset_controller", 00:11:17.656 "bdev_nvme_get_transport_statistics", 00:11:17.656 "bdev_nvme_apply_firmware", 00:11:17.656 "bdev_nvme_detach_controller", 00:11:17.656 "bdev_nvme_get_controllers", 00:11:17.656 "bdev_nvme_attach_controller", 00:11:17.656 "bdev_nvme_set_hotplug", 00:11:17.656 "bdev_nvme_set_options", 00:11:17.656 "bdev_passthru_delete", 00:11:17.656 "bdev_passthru_create", 00:11:17.656 "bdev_lvol_set_parent_bdev", 00:11:17.656 "bdev_lvol_set_parent", 00:11:17.656 "bdev_lvol_check_shallow_copy", 00:11:17.656 "bdev_lvol_start_shallow_copy", 00:11:17.656 "bdev_lvol_grow_lvstore", 00:11:17.656 "bdev_lvol_get_lvols", 00:11:17.656 "bdev_lvol_get_lvstores", 00:11:17.656 "bdev_lvol_delete", 00:11:17.656 "bdev_lvol_set_read_only", 00:11:17.656 "bdev_lvol_resize", 00:11:17.656 "bdev_lvol_decouple_parent", 00:11:17.656 "bdev_lvol_inflate", 00:11:17.656 "bdev_lvol_rename", 00:11:17.656 "bdev_lvol_clone_bdev", 00:11:17.656 "bdev_lvol_clone", 00:11:17.656 "bdev_lvol_snapshot", 00:11:17.656 "bdev_lvol_create", 00:11:17.656 "bdev_lvol_delete_lvstore", 00:11:17.656 "bdev_lvol_rename_lvstore", 00:11:17.656 "bdev_lvol_create_lvstore", 00:11:17.656 "bdev_raid_set_options", 00:11:17.656 "bdev_raid_remove_base_bdev", 00:11:17.656 "bdev_raid_add_base_bdev", 00:11:17.656 "bdev_raid_delete", 00:11:17.656 "bdev_raid_create", 00:11:17.656 "bdev_raid_get_bdevs", 00:11:17.656 "bdev_error_inject_error", 00:11:17.656 "bdev_error_delete", 00:11:17.656 "bdev_error_create", 00:11:17.656 "bdev_split_delete", 00:11:17.656 "bdev_split_create", 00:11:17.656 "bdev_delay_delete", 00:11:17.656 "bdev_delay_create", 00:11:17.656 "bdev_delay_update_latency", 00:11:17.656 "bdev_zone_block_delete", 00:11:17.656 "bdev_zone_block_create", 00:11:17.656 "blobfs_create", 00:11:17.656 "blobfs_detect", 00:11:17.656 "blobfs_set_cache_size", 00:11:17.656 "bdev_ocf_flush_status", 00:11:17.656 "bdev_ocf_flush_start", 00:11:17.656 "bdev_ocf_set_seqcutoff", 00:11:17.656 "bdev_ocf_set_cache_mode", 00:11:17.656 "bdev_ocf_get_bdevs", 00:11:17.656 "bdev_ocf_reset_stats", 00:11:17.656 "bdev_ocf_get_stats", 00:11:17.656 "bdev_ocf_delete", 00:11:17.656 "bdev_ocf_create", 00:11:17.656 "bdev_aio_delete", 00:11:17.656 "bdev_aio_rescan", 00:11:17.656 "bdev_aio_create", 00:11:17.656 "bdev_ftl_set_property", 00:11:17.656 "bdev_ftl_get_properties", 00:11:17.656 "bdev_ftl_get_stats", 00:11:17.656 "bdev_ftl_unmap", 00:11:17.656 "bdev_ftl_unload", 00:11:17.656 "bdev_ftl_delete", 00:11:17.656 "bdev_ftl_load", 00:11:17.656 "bdev_ftl_create", 00:11:17.656 "bdev_virtio_attach_controller", 00:11:17.656 "bdev_virtio_scsi_get_devices", 00:11:17.657 "bdev_virtio_detach_controller", 00:11:17.657 "bdev_virtio_blk_set_hotplug", 00:11:17.657 "bdev_iscsi_delete", 00:11:17.657 "bdev_iscsi_create", 00:11:17.657 "bdev_iscsi_set_options", 00:11:17.657 "accel_error_inject_error", 00:11:17.657 "ioat_scan_accel_module", 00:11:17.657 "dsa_scan_accel_module", 00:11:17.657 "iaa_scan_accel_module", 00:11:17.657 
"keyring_file_remove_key", 00:11:17.657 "keyring_file_add_key", 00:11:17.657 "keyring_linux_set_options", 00:11:17.657 "fsdev_aio_delete", 00:11:17.657 "fsdev_aio_create", 00:11:17.657 "iscsi_get_histogram", 00:11:17.657 "iscsi_enable_histogram", 00:11:17.657 "iscsi_set_options", 00:11:17.657 "iscsi_get_auth_groups", 00:11:17.657 "iscsi_auth_group_remove_secret", 00:11:17.657 "iscsi_auth_group_add_secret", 00:11:17.657 "iscsi_delete_auth_group", 00:11:17.657 "iscsi_create_auth_group", 00:11:17.657 "iscsi_set_discovery_auth", 00:11:17.657 "iscsi_get_options", 00:11:17.657 "iscsi_target_node_request_logout", 00:11:17.657 "iscsi_target_node_set_redirect", 00:11:17.657 "iscsi_target_node_set_auth", 00:11:17.657 "iscsi_target_node_add_lun", 00:11:17.657 "iscsi_get_stats", 00:11:17.657 "iscsi_get_connections", 00:11:17.657 "iscsi_portal_group_set_auth", 00:11:17.657 "iscsi_start_portal_group", 00:11:17.657 "iscsi_delete_portal_group", 00:11:17.657 "iscsi_create_portal_group", 00:11:17.657 "iscsi_get_portal_groups", 00:11:17.657 "iscsi_delete_target_node", 00:11:17.657 "iscsi_target_node_remove_pg_ig_maps", 00:11:17.657 "iscsi_target_node_add_pg_ig_maps", 00:11:17.657 "iscsi_create_target_node", 00:11:17.657 "iscsi_get_target_nodes", 00:11:17.657 "iscsi_delete_initiator_group", 00:11:17.657 "iscsi_initiator_group_remove_initiators", 00:11:17.657 "iscsi_initiator_group_add_initiators", 00:11:17.657 "iscsi_create_initiator_group", 00:11:17.657 "iscsi_get_initiator_groups", 00:11:17.657 "nvmf_set_crdt", 00:11:17.657 "nvmf_set_config", 00:11:17.657 "nvmf_set_max_subsystems", 00:11:17.657 "nvmf_stop_mdns_prr", 00:11:17.657 "nvmf_publish_mdns_prr", 00:11:17.657 "nvmf_subsystem_get_listeners", 00:11:17.657 "nvmf_subsystem_get_qpairs", 00:11:17.657 "nvmf_subsystem_get_controllers", 00:11:17.657 "nvmf_get_stats", 00:11:17.657 "nvmf_get_transports", 00:11:17.657 "nvmf_create_transport", 00:11:17.657 "nvmf_get_targets", 00:11:17.657 "nvmf_delete_target", 00:11:17.657 "nvmf_create_target", 00:11:17.657 "nvmf_subsystem_allow_any_host", 00:11:17.657 "nvmf_subsystem_set_keys", 00:11:17.657 "nvmf_subsystem_remove_host", 00:11:17.657 "nvmf_subsystem_add_host", 00:11:17.657 "nvmf_ns_remove_host", 00:11:17.657 "nvmf_ns_add_host", 00:11:17.657 "nvmf_subsystem_remove_ns", 00:11:17.657 "nvmf_subsystem_set_ns_ana_group", 00:11:17.657 "nvmf_subsystem_add_ns", 00:11:17.657 "nvmf_subsystem_listener_set_ana_state", 00:11:17.657 "nvmf_discovery_get_referrals", 00:11:17.657 "nvmf_discovery_remove_referral", 00:11:17.657 "nvmf_discovery_add_referral", 00:11:17.657 "nvmf_subsystem_remove_listener", 00:11:17.657 "nvmf_subsystem_add_listener", 00:11:17.657 "nvmf_delete_subsystem", 00:11:17.657 "nvmf_create_subsystem", 00:11:17.657 "nvmf_get_subsystems", 00:11:17.657 "env_dpdk_get_mem_stats", 00:11:17.657 "nbd_get_disks", 00:11:17.657 "nbd_stop_disk", 00:11:17.657 "nbd_start_disk", 00:11:17.657 "ublk_recover_disk", 00:11:17.657 "ublk_get_disks", 00:11:17.657 "ublk_stop_disk", 00:11:17.657 "ublk_start_disk", 00:11:17.657 "ublk_destroy_target", 00:11:17.657 "ublk_create_target", 00:11:17.657 "virtio_blk_create_transport", 00:11:17.657 "virtio_blk_get_transports", 00:11:17.657 "vhost_controller_set_coalescing", 00:11:17.657 "vhost_get_controllers", 00:11:17.657 "vhost_delete_controller", 00:11:17.657 "vhost_create_blk_controller", 00:11:17.657 "vhost_scsi_controller_remove_target", 00:11:17.657 "vhost_scsi_controller_add_target", 00:11:17.657 "vhost_start_scsi_controller", 00:11:17.657 "vhost_create_scsi_controller", 00:11:17.657 
"thread_set_cpumask", 00:11:17.657 "scheduler_set_options", 00:11:17.657 "framework_get_governor", 00:11:17.657 "framework_get_scheduler", 00:11:17.657 "framework_set_scheduler", 00:11:17.657 "framework_get_reactors", 00:11:17.657 "thread_get_io_channels", 00:11:17.657 "thread_get_pollers", 00:11:17.657 "thread_get_stats", 00:11:17.657 "framework_monitor_context_switch", 00:11:17.657 "spdk_kill_instance", 00:11:17.657 "log_enable_timestamps", 00:11:17.657 "log_get_flags", 00:11:17.657 "log_clear_flag", 00:11:17.657 "log_set_flag", 00:11:17.657 "log_get_level", 00:11:17.657 "log_set_level", 00:11:17.657 "log_get_print_level", 00:11:17.657 "log_set_print_level", 00:11:17.657 "framework_enable_cpumask_locks", 00:11:17.657 "framework_disable_cpumask_locks", 00:11:17.657 "framework_wait_init", 00:11:17.657 "framework_start_init", 00:11:17.657 "scsi_get_devices", 00:11:17.657 "bdev_get_histogram", 00:11:17.657 "bdev_enable_histogram", 00:11:17.657 "bdev_set_qos_limit", 00:11:17.657 "bdev_set_qd_sampling_period", 00:11:17.657 "bdev_get_bdevs", 00:11:17.657 "bdev_reset_iostat", 00:11:17.657 "bdev_get_iostat", 00:11:17.657 "bdev_examine", 00:11:17.657 "bdev_wait_for_examine", 00:11:17.657 "bdev_set_options", 00:11:17.657 "accel_get_stats", 00:11:17.657 "accel_set_options", 00:11:17.657 "accel_set_driver", 00:11:17.657 "accel_crypto_key_destroy", 00:11:17.657 "accel_crypto_keys_get", 00:11:17.657 "accel_crypto_key_create", 00:11:17.657 "accel_assign_opc", 00:11:17.657 "accel_get_module_info", 00:11:17.657 "accel_get_opc_assignments", 00:11:17.657 "vmd_rescan", 00:11:17.657 "vmd_remove_device", 00:11:17.657 "vmd_enable", 00:11:17.657 "sock_get_default_impl", 00:11:17.657 "sock_set_default_impl", 00:11:17.657 "sock_impl_set_options", 00:11:17.657 "sock_impl_get_options", 00:11:17.657 "iobuf_get_stats", 00:11:17.657 "iobuf_set_options", 00:11:17.657 "keyring_get_keys", 00:11:17.657 "framework_get_pci_devices", 00:11:17.657 "framework_get_config", 00:11:17.657 "framework_get_subsystems", 00:11:17.657 "fsdev_set_opts", 00:11:17.657 "fsdev_get_opts", 00:11:17.657 "trace_get_info", 00:11:17.657 "trace_get_tpoint_group_mask", 00:11:17.657 "trace_disable_tpoint_group", 00:11:17.657 "trace_enable_tpoint_group", 00:11:17.657 "trace_clear_tpoint_mask", 00:11:17.657 "trace_set_tpoint_mask", 00:11:17.657 "notify_get_notifications", 00:11:17.657 "notify_get_types", 00:11:17.657 "spdk_get_version", 00:11:17.657 "rpc_get_methods" 00:11:17.657 ] 00:11:17.657 13:42:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:17.657 13:42:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.657 13:42:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.917 13:42:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:17.917 13:42:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3839817 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3839817 ']' 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3839817 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3839817 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.917 13:42:49 
spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3839817' 00:11:17.917 killing process with pid 3839817 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3839817 00:11:17.917 13:42:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3839817 00:11:18.485 00:11:18.485 real 0m1.669s 00:11:18.485 user 0m2.897s 00:11:18.485 sys 0m0.638s 00:11:18.485 13:42:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.485 13:42:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.485 ************************************ 00:11:18.485 END TEST spdkcli_tcp 00:11:18.485 ************************************ 00:11:18.485 13:42:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:18.485 13:42:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.485 13:42:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.485 13:42:49 -- common/autotest_common.sh@10 -- # set +x 00:11:18.485 ************************************ 00:11:18.485 START TEST dpdk_mem_utility 00:11:18.485 ************************************ 00:11:18.485 13:42:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:18.485 * Looking for test storage... 00:11:18.485 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility 00:11:18.485 13:42:49 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.485 13:42:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.485 13:42:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.744 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.744 13:42:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.745 13:42:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.745 --rc genhtml_branch_coverage=1 00:11:18.745 --rc genhtml_function_coverage=1 00:11:18.745 --rc genhtml_legend=1 00:11:18.745 --rc geninfo_all_blocks=1 00:11:18.745 --rc geninfo_unexecuted_blocks=1 00:11:18.745 00:11:18.745 ' 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.745 --rc genhtml_branch_coverage=1 00:11:18.745 --rc genhtml_function_coverage=1 00:11:18.745 --rc genhtml_legend=1 00:11:18.745 --rc geninfo_all_blocks=1 00:11:18.745 --rc geninfo_unexecuted_blocks=1 00:11:18.745 00:11:18.745 ' 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.745 --rc genhtml_branch_coverage=1 00:11:18.745 --rc genhtml_function_coverage=1 00:11:18.745 --rc genhtml_legend=1 00:11:18.745 --rc geninfo_all_blocks=1 00:11:18.745 --rc geninfo_unexecuted_blocks=1 00:11:18.745 00:11:18.745 ' 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.745 --rc genhtml_branch_coverage=1 00:11:18.745 --rc genhtml_function_coverage=1 00:11:18.745 --rc genhtml_legend=1 00:11:18.745 --rc geninfo_all_blocks=1 00:11:18.745 --rc geninfo_unexecuted_blocks=1 00:11:18.745 00:11:18.745 ' 00:11:18.745 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:11:18.745 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3840078 00:11:18.745 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3840078 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3840078 ']' 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 
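The dpdk_mem_utility test being set up here follows the same harness pattern as the spdkcli_tcp run before it: launch spdk_tgt in the background, wait for its UNIX-domain RPC socket, drive the target with scripts/rpc.py (the long method listing earlier in this log is the output of exactly such an rpc_get_methods call), then kill the process. A minimal sketch of that flow, assuming an SPDK build tree and the default /var/tmp/spdk.sock socket rather than this job's workspace paths:

  ./build/bin/spdk_tgt &                                   # start the target in the background
  tgt_pid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1                                            # the test framework's waitforlisten does this polling
  done
  ./scripts/rpc.py rpc_get_methods                         # the call whose output is listed earlier in this log
  kill -9 "$tgt_pid"

The trace that follows is the test's own version of this flow: it echoes the wait message, blocks until the socket answers, then issues its RPCs against the running target.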
00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.745 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:18.745 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:18.745 [2024-12-05 13:42:50.111995] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:18.745 [2024-12-05 13:42:50.112076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840078 ] 00:11:18.745 [2024-12-05 13:42:50.233126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.004 [2024-12-05 13:42:50.290912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.004 [2024-12-05 13:42:50.512740] 'OCF_Core' volume operations registered 00:11:19.004 [2024-12-05 13:42:50.512777] 'OCF_Cache' volume operations registered 00:11:19.004 [2024-12-05 13:42:50.517224] 'OCF Composite' volume operations registered 00:11:19.004 [2024-12-05 13:42:50.521673] 'SPDK_block_device' volume operations registered 00:11:19.264 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.264 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:11:19.264 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:19.264 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:19.264 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.264 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:19.264 { 00:11:19.264 "filename": "/tmp/spdk_mem_dump.txt" 00:11:19.264 } 00:11:19.264 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.264 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:11:19.264 DPDK memory size 1200.000000 MiB in 1 heap(s) 00:11:19.264 1 heaps totaling size 1200.000000 MiB 00:11:19.264 size: 1200.000000 MiB heap id: 0 00:11:19.264 end heaps---------- 00:11:19.264 26 mempools totaling size 958.039612 MiB 00:11:19.264 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:19.264 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:19.264 size: 100.555481 MiB name: bdev_io_3840078 00:11:19.264 size: 76.286926 MiB name: ocf_env_12:ocf_mio_8 00:11:19.264 size: 58.218811 MiB name: ocf_env_8:ocf_req_128 00:11:19.264 size: 50.003479 MiB name: msgpool_3840078 00:11:19.264 size: 40.142639 MiB name: ocf_env_11:ocf_mio_4 00:11:19.264 size: 36.509338 MiB name: fsdev_io_3840078 00:11:19.264 size: 34.164612 MiB name: ocf_env_7:ocf_req_64 00:11:19.264 size: 22.138245 MiB name: ocf_env_6:ocf_req_32 00:11:19.264 size: 22.138245 MiB name: ocf_env_10:ocf_mio_2 00:11:19.264 size: 21.763794 MiB name: PDU_Pool 00:11:19.264 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:19.264 size: 16.136780 MiB name: 
ocf_env_5:ocf_req_16 00:11:19.264 size: 14.136292 MiB name: ocf_env_4:ocf_req_8 00:11:19.264 size: 14.136292 MiB name: ocf_env_9:ocf_mio_1 00:11:19.264 size: 12.136414 MiB name: ocf_env_3:ocf_req_4 00:11:19.264 size: 10.135315 MiB name: ocf_env_1:ocf_req_1 00:11:19.264 size: 10.135315 MiB name: ocf_env_2:ocf_req_2 00:11:19.264 size: 10.135315 MiB name: ocf_env_16:OCF Composit 00:11:19.264 size: 10.135315 MiB name: ocf_env_17:SPDK_block_d 00:11:19.264 size: 4.133484 MiB name: evtpool_3840078 00:11:19.264 size: 1.609375 MiB name: ocf_env_15:ocf_mio_64 00:11:19.264 size: 1.310547 MiB name: ocf_env_14:ocf_mio_32 00:11:19.264 size: 1.161133 MiB name: ocf_env_13:ocf_mio_16 00:11:19.264 size: 0.026123 MiB name: Session_Pool 00:11:19.264 end mempools------- 00:11:19.264 6 memzones totaling size 4.142822 MiB 00:11:19.264 size: 1.000366 MiB name: RG_ring_0_3840078 00:11:19.264 size: 1.000366 MiB name: RG_ring_1_3840078 00:11:19.264 size: 1.000366 MiB name: RG_ring_4_3840078 00:11:19.264 size: 1.000366 MiB name: RG_ring_5_3840078 00:11:19.264 size: 0.125366 MiB name: RG_ring_2_3840078 00:11:19.264 size: 0.015991 MiB name: RG_ring_3_3840078 00:11:19.264 end memzones------- 00:11:19.264 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:11:19.525 heap id: 0 total size: 1200.000000 MiB number of busy elements: 119 number of free elements: 46 00:11:19.525 list of free elements. size: 38.574463 MiB 00:11:19.525 element at address: 0x200030a00000 with size: 0.999878 MiB 00:11:19.525 element at address: 0x200030e00000 with size: 0.999329 MiB 00:11:19.525 element at address: 0x200019200000 with size: 0.998718 MiB 00:11:19.525 element at address: 0x200000400000 with size: 0.998535 MiB 00:11:19.525 element at address: 0x200030000000 with size: 0.997742 MiB 00:11:19.525 element at address: 0x200019400000 with size: 0.997375 MiB 00:11:19.525 element at address: 0x200019e00000 with size: 0.997375 MiB 00:11:19.525 element at address: 0x20002f400000 with size: 0.997192 MiB 00:11:19.525 element at address: 0x20001b400000 with size: 0.996399 MiB 00:11:19.525 element at address: 0x200024e00000 with size: 0.996399 MiB 00:11:19.525 element at address: 0x20001a800000 with size: 0.996277 MiB 00:11:19.525 element at address: 0x20001c400000 with size: 0.995911 MiB 00:11:19.525 element at address: 0x20001d600000 with size: 0.994446 MiB 00:11:19.525 element at address: 0x200025e00000 with size: 0.994446 MiB 00:11:19.525 element at address: 0x200049e00000 with size: 0.994446 MiB 00:11:19.525 element at address: 0x200027600000 with size: 0.990051 MiB 00:11:19.525 element at address: 0x20001ee00000 with size: 0.968079 MiB 00:11:19.525 element at address: 0x20003fc00000 with size: 0.959961 MiB 00:11:19.525 element at address: 0x200030c00000 with size: 0.936584 MiB 00:11:19.525 element at address: 0x200021200000 with size: 0.913635 MiB 00:11:19.525 element at address: 0x20001c200000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x20001d400000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x20001ec00000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x200021000000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x200024c00000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x200025c00000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x200027400000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x200029e00000 with size: 0.866211 MiB 00:11:19.525 element at 
address: 0x20002f200000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x20002fe00000 with size: 0.866211 MiB 00:11:19.525 element at address: 0x200006400000 with size: 0.866089 MiB 00:11:19.525 element at address: 0x20000a600000 with size: 0.866089 MiB 00:11:19.525 element at address: 0x200003e00000 with size: 0.857300 MiB 00:11:19.525 element at address: 0x20002a000000 with size: 0.845764 MiB 00:11:19.525 element at address: 0x20002ec00000 with size: 0.837769 MiB 00:11:19.525 element at address: 0x200012c00000 with size: 0.811157 MiB 00:11:19.525 element at address: 0x200000200000 with size: 0.717346 MiB 00:11:19.525 element at address: 0x20002ee00000 with size: 0.688354 MiB 00:11:19.525 element at address: 0x200032800000 with size: 0.582886 MiB 00:11:19.525 element at address: 0x200000c00000 with size: 0.495422 MiB 00:11:19.525 element at address: 0x200031000000 with size: 0.490845 MiB 00:11:19.525 element at address: 0x200049c00000 with size: 0.490845 MiB 00:11:19.525 element at address: 0x200031200000 with size: 0.485657 MiB 00:11:19.525 element at address: 0x20003fe00000 with size: 0.410034 MiB 00:11:19.525 element at address: 0x20002f000000 with size: 0.388977 MiB 00:11:19.525 element at address: 0x200000800000 with size: 0.355042 MiB 00:11:19.525 list of standard malloc elements. size: 199.232849 MiB 00:11:19.525 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:11:19.525 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:11:19.525 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:19.525 element at address: 0x200030afff80 with size: 1.000122 MiB 00:11:19.525 element at address: 0x200030cfff80 with size: 1.000122 MiB 00:11:19.525 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:19.525 element at address: 0x200030ceff00 with size: 0.062622 MiB 00:11:19.525 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:19.525 element at address: 0x2000192ffd40 with size: 0.000549 MiB 00:11:19.525 element at address: 0x200030cefdc0 with size: 0.000305 MiB 00:11:19.525 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:11:19.525 element at address: 0x2000212e9fc0 with size: 0.000244 MiB 00:11:19.525 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20000085b040 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20000085f300 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20000087f680 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200000cff000 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200003efb980 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:11:19.525 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200012cefc80 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000192ffac0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000192ffb80 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000194ff540 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000194ff600 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000194ff6c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200019eff540 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200019eff600 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200019eff6c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001a8ff0c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001a8ff180 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001a8ff240 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001b4ff140 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001b4ff200 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001b4ff2c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001c2fde00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001c4fef40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001c4ff000 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001c4ff0c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001d4fde00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001d6fe940 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001d6fea00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001d6feac0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001ecfde00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001eef7d40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001eef7e00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x20001eef7ec0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000210fde00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000212e9e40 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000212e9f00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x2000212ea0c0 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200024cfde00 with size: 0.000183 MiB 00:11:19.525 element at address: 0x200024eff140 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200024eff200 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200024eff2c0 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200025cfde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200025efe940 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200025efea00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200025efeac0 with size: 0.000183 MiB 00:11:19.526 element at address: 0x2000274fde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x2000276fd740 with size: 0.000183 MiB 00:11:19.526 element at address: 0x2000276fd800 with size: 0.000183 MiB 00:11:19.526 element at address: 0x2000276fd8c0 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200029efde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002a0d8840 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002a0d8900 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002a0d89c0 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002ecd6780 with size: 0.000183 MiB 
00:11:19.526 element at address: 0x20002ecd6840 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002ecd6900 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002ecfde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002eeb0380 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002eeb0440 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002eeb0500 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002eefde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f063940 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f063a00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f063ac0 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f063b80 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f063c40 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f063d00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f0fde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f2fde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f4ff480 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f4ff540 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f4ff600 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002f4ff6c0 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20002fefde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x2000300ff6c0 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200030cefc40 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200030cefd00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200030effd40 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003107da80 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003107db40 with size: 0.000183 MiB 00:11:19.526 element at address: 0x2000310fde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x2000312bc740 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200032895380 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200032895440 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003fcfde00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003fe68f80 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003fe69040 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003fe6fc40 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003fe6fe40 with size: 0.000183 MiB 00:11:19.526 element at address: 0x20003fe6ff00 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200049c7da80 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200049c7db40 with size: 0.000183 MiB 00:11:19.526 element at address: 0x200049cfde00 with size: 0.000183 MiB 00:11:19.526 list of memzone associated elements. 
size: 962.192688 MiB 00:11:19.526 element at address: 0x200032895500 with size: 211.416748 MiB 00:11:19.526 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:19.526 element at address: 0x20003fe6ffc0 with size: 157.562561 MiB 00:11:19.526 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:19.526 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:11:19.526 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3840078_0 00:11:19.526 element at address: 0x20002a0d8a80 with size: 75.153687 MiB 00:11:19.526 associated memzone info: size: 75.153564 MiB name: MP_ocf_env_12:ocf_mio_8_0 00:11:19.526 element at address: 0x2000212ea180 with size: 57.085571 MiB 00:11:19.526 associated memzone info: size: 57.085449 MiB name: MP_ocf_env_8:ocf_req_128_0 00:11:19.526 element at address: 0x200000dff380 with size: 48.003052 MiB 00:11:19.526 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3840078_0 00:11:19.526 element at address: 0x2000276fd980 with size: 39.009399 MiB 00:11:19.526 associated memzone info: size: 39.009277 MiB name: MP_ocf_env_11:ocf_mio_4_0 00:11:19.526 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:11:19.526 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3840078_0 00:11:19.526 element at address: 0x20001eef7f80 with size: 33.031372 MiB 00:11:19.526 associated memzone info: size: 33.031250 MiB name: MP_ocf_env_7:ocf_req_64_0 00:11:19.526 element at address: 0x20001d6feb80 with size: 21.005005 MiB 00:11:19.526 associated memzone info: size: 21.004883 MiB name: MP_ocf_env_6:ocf_req_32_0 00:11:19.526 element at address: 0x200025efeb80 with size: 21.005005 MiB 00:11:19.526 associated memzone info: size: 21.004883 MiB name: MP_ocf_env_10:ocf_mio_2_0 00:11:19.526 element at address: 0x2000313be940 with size: 20.255554 MiB 00:11:19.526 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:19.526 element at address: 0x200049ffeb40 with size: 18.005066 MiB 00:11:19.526 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:19.526 element at address: 0x20001c4ff180 with size: 15.003540 MiB 00:11:19.526 associated memzone info: size: 15.003418 MiB name: MP_ocf_env_5:ocf_req_16_0 00:11:19.526 element at address: 0x20001b4ff380 with size: 13.003052 MiB 00:11:19.526 associated memzone info: size: 13.002930 MiB name: MP_ocf_env_4:ocf_req_8_0 00:11:19.526 element at address: 0x200024eff380 with size: 13.003052 MiB 00:11:19.526 associated memzone info: size: 13.002930 MiB name: MP_ocf_env_9:ocf_mio_1_0 00:11:19.526 element at address: 0x20001a8ff300 with size: 11.003174 MiB 00:11:19.526 associated memzone info: size: 11.003052 MiB name: MP_ocf_env_3:ocf_req_4_0 00:11:19.526 element at address: 0x2000194ff780 with size: 9.002075 MiB 00:11:19.526 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_1:ocf_req_1_0 00:11:19.526 element at address: 0x200019eff780 with size: 9.002075 MiB 00:11:19.526 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_2:ocf_req_2_0 00:11:19.526 element at address: 0x20002f4ff780 with size: 9.002075 MiB 00:11:19.526 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_16:OCF Composit_0 00:11:19.526 element at address: 0x2000300ff780 with size: 9.002075 MiB 00:11:19.526 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_17:SPDK_block_d_0 00:11:19.526 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:11:19.526 associated memzone info: size: 3.000122 MiB name: 
MP_evtpool_3840078_0 00:11:19.526 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:11:19.526 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3840078 00:11:19.526 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3840078 00:11:19.526 element at address: 0x200012cefd40 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_1:ocf_req_1 00:11:19.526 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_2:ocf_req_2 00:11:19.526 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_3:ocf_req_4 00:11:19.526 element at address: 0x200003efba40 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_4:ocf_req_8 00:11:19.526 element at address: 0x20001c2fdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_5:ocf_req_16 00:11:19.526 element at address: 0x20001d4fdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_6:ocf_req_32 00:11:19.526 element at address: 0x20001ecfdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_7:ocf_req_64 00:11:19.526 element at address: 0x2000210fdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_8:ocf_req_128 00:11:19.526 element at address: 0x200024cfdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_9:ocf_mio_1 00:11:19.526 element at address: 0x200025cfdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_10:ocf_mio_2 00:11:19.526 element at address: 0x2000274fdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_11:ocf_mio_4 00:11:19.526 element at address: 0x200029efdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_12:ocf_mio_8 00:11:19.526 element at address: 0x20002ecfdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_13:ocf_mio_16 00:11:19.526 element at address: 0x20002eefdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_14:ocf_mio_32 00:11:19.526 element at address: 0x20002f0fdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_15:ocf_mio_64 00:11:19.526 element at address: 0x20002f2fdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_16:OCF Composit 00:11:19.526 element at address: 0x20002fefdec0 with size: 1.008118 MiB 00:11:19.526 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_17:SPDK_block_d 00:11:19.527 element at address: 0x2000310fdec0 with size: 1.008118 MiB 00:11:19.527 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:19.527 element at address: 0x2000312bc800 with size: 1.008118 MiB 00:11:19.527 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:19.527 element at address: 0x20003fcfdec0 with size: 1.008118 MiB 00:11:19.527 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:19.527 element at address: 0x200049cfdec0 
with size: 1.008118 MiB 00:11:19.527 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:19.527 element at address: 0x200000cff180 with size: 1.000488 MiB 00:11:19.527 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3840078 00:11:19.527 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:11:19.527 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3840078 00:11:19.527 element at address: 0x200030effe00 with size: 1.000488 MiB 00:11:19.527 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3840078 00:11:19.527 element at address: 0x200049efe940 with size: 1.000488 MiB 00:11:19.527 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3840078 00:11:19.527 element at address: 0x20002f063dc0 with size: 0.600891 MiB 00:11:19.527 associated memzone info: size: 0.600769 MiB name: MP_ocf_env_15:ocf_mio_64_0 00:11:19.527 element at address: 0x20000087f740 with size: 0.500488 MiB 00:11:19.527 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3840078 00:11:19.527 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:11:19.527 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3840078 00:11:19.527 element at address: 0x20003107dc00 with size: 0.500488 MiB 00:11:19.527 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:19.527 element at address: 0x200049c7dc00 with size: 0.500488 MiB 00:11:19.527 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:19.527 element at address: 0x20002eeb05c0 with size: 0.302063 MiB 00:11:19.527 associated memzone info: size: 0.301941 MiB name: MP_ocf_env_14:ocf_mio_32_0 00:11:19.527 element at address: 0x20003127c540 with size: 0.250488 MiB 00:11:19.527 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:19.527 element at address: 0x20002ecd69c0 with size: 0.152649 MiB 00:11:19.527 associated memzone info: size: 0.152527 MiB name: MP_ocf_env_13:ocf_mio_16_0 00:11:19.527 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3840078 00:11:19.527 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3840078 00:11:19.527 element at address: 0x200012ccfa80 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_1:ocf_req_1 00:11:19.527 element at address: 0x20000a6ddb80 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_2:ocf_req_2 00:11:19.527 element at address: 0x2000064ddb80 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_3:ocf_req_4 00:11:19.527 element at address: 0x200003edb780 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_4:ocf_req_8 00:11:19.527 element at address: 0x20001c2ddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_5:ocf_req_16 00:11:19.527 element at address: 0x20001d4ddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_6:ocf_req_32 00:11:19.527 element at address: 0x20001ecddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_7:ocf_req_64 00:11:19.527 element at address: 0x2000210ddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 
MiB name: RG_MP_ocf_env_8:ocf_req_128 00:11:19.527 element at address: 0x200024cddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_9:ocf_mio_1 00:11:19.527 element at address: 0x200025cddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_10:ocf_mio_2 00:11:19.527 element at address: 0x2000274ddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_11:ocf_mio_4 00:11:19.527 element at address: 0x200029eddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_12:ocf_mio_8 00:11:19.527 element at address: 0x20002f2ddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_16:OCF Composit 00:11:19.527 element at address: 0x20002feddc00 with size: 0.125488 MiB 00:11:19.527 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_17:SPDK_block_d 00:11:19.527 element at address: 0x20003fcf5c00 with size: 0.031738 MiB 00:11:19.527 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:19.527 element at address: 0x20003fe69100 with size: 0.023743 MiB 00:11:19.527 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:19.527 element at address: 0x20000085b100 with size: 0.016113 MiB 00:11:19.527 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3840078 00:11:19.527 element at address: 0x20003fe6f240 with size: 0.002441 MiB 00:11:19.527 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:19.527 element at address: 0x20002ecfdb00 with size: 0.000732 MiB 00:11:19.527 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_13:ocf_mio_16 00:11:19.527 element at address: 0x20002eefdb00 with size: 0.000732 MiB 00:11:19.527 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_14:ocf_mio_32 00:11:19.527 element at address: 0x20002f0fdb00 with size: 0.000732 MiB 00:11:19.527 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_15:ocf_mio_64 00:11:19.527 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:11:19.527 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3840078 00:11:19.527 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:11:19.527 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3840078 00:11:19.527 element at address: 0x20000085af00 with size: 0.000305 MiB 00:11:19.527 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3840078 00:11:19.527 element at address: 0x20003fe6fd00 with size: 0.000305 MiB 00:11:19.527 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:19.527 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:19.527 13:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3840078 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3840078 ']' 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3840078 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840078 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
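The heap, mempool, and memzone tables above come from two helper invocations against the running target, both visible in the trace: rpc_cmd env_dpdk_get_mem_stats asks spdk_tgt to write a DPDK memory dump (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then post-processes that dump, first without arguments for the per-pool summary and then with -m 0 for the heap 0 free-element, malloc-element, and memzone listings. Reproduced outside the harness, and assuming the default dump location, the sequence would look roughly like:

  ./scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                   # summary: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0              # detailed element listing for heap id 0, as shown above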
00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840078' 00:11:19.527 killing process with pid 3840078 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3840078 00:11:19.527 13:42:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3840078 00:11:20.096 00:11:20.096 real 0m1.566s 00:11:20.096 user 0m1.397s 00:11:20.096 sys 0m0.630s 00:11:20.096 13:42:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.096 13:42:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:20.096 ************************************ 00:11:20.096 END TEST dpdk_mem_utility 00:11:20.096 ************************************ 00:11:20.096 13:42:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh 00:11:20.096 13:42:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.096 13:42:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.096 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:11:20.096 ************************************ 00:11:20.096 START TEST event 00:11:20.096 ************************************ 00:11:20.096 13:42:51 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh 00:11:20.096 * Looking for test storage... 00:11:20.096 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event 00:11:20.096 13:42:51 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.096 13:42:51 event -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.096 13:42:51 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.355 13:42:51 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.356 13:42:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.356 13:42:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.356 13:42:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.356 13:42:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.356 13:42:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.356 13:42:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.356 13:42:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.356 13:42:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.356 13:42:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.356 13:42:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.356 13:42:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.356 13:42:51 event -- scripts/common.sh@344 -- # case "$op" in 00:11:20.356 13:42:51 event -- scripts/common.sh@345 -- # : 1 00:11:20.356 13:42:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.356 13:42:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.356 13:42:51 event -- scripts/common.sh@365 -- # decimal 1 00:11:20.356 13:42:51 event -- scripts/common.sh@353 -- # local d=1 00:11:20.356 13:42:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.356 13:42:51 event -- scripts/common.sh@355 -- # echo 1 00:11:20.356 13:42:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.356 13:42:51 event -- scripts/common.sh@366 -- # decimal 2 00:11:20.356 13:42:51 event -- scripts/common.sh@353 -- # local d=2 00:11:20.356 13:42:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.356 13:42:51 event -- scripts/common.sh@355 -- # echo 2 00:11:20.356 13:42:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.356 13:42:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.356 13:42:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.356 13:42:51 event -- scripts/common.sh@368 -- # return 0 00:11:20.356 13:42:51 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.356 13:42:51 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.356 --rc genhtml_branch_coverage=1 00:11:20.356 --rc genhtml_function_coverage=1 00:11:20.356 --rc genhtml_legend=1 00:11:20.356 --rc geninfo_all_blocks=1 00:11:20.356 --rc geninfo_unexecuted_blocks=1 00:11:20.356 00:11:20.356 ' 00:11:20.356 13:42:51 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.356 --rc genhtml_branch_coverage=1 00:11:20.356 --rc genhtml_function_coverage=1 00:11:20.356 --rc genhtml_legend=1 00:11:20.356 --rc geninfo_all_blocks=1 00:11:20.356 --rc geninfo_unexecuted_blocks=1 00:11:20.356 00:11:20.356 ' 00:11:20.356 13:42:51 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.356 --rc genhtml_branch_coverage=1 00:11:20.356 --rc genhtml_function_coverage=1 00:11:20.356 --rc genhtml_legend=1 00:11:20.356 --rc geninfo_all_blocks=1 00:11:20.356 --rc geninfo_unexecuted_blocks=1 00:11:20.356 00:11:20.356 ' 00:11:20.356 13:42:51 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.356 --rc genhtml_branch_coverage=1 00:11:20.356 --rc genhtml_function_coverage=1 00:11:20.356 --rc genhtml_legend=1 00:11:20.356 --rc geninfo_all_blocks=1 00:11:20.356 --rc geninfo_unexecuted_blocks=1 00:11:20.356 00:11:20.356 ' 00:11:20.356 13:42:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:11:20.356 13:42:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:20.356 13:42:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:20.356 13:42:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:20.356 13:42:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.356 13:42:51 event -- common/autotest_common.sh@10 -- # set +x 00:11:20.356 ************************************ 00:11:20.356 START TEST event_perf 00:11:20.356 ************************************ 00:11:20.356 13:42:51 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
00:11:20.356 Running I/O for 1 seconds...[2024-12-05 13:42:51.786851] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:20.356 [2024-12-05 13:42:51.786936] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840324 ] 00:11:20.615 [2024-12-05 13:42:51.898003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.615 [2024-12-05 13:42:51.957573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.615 [2024-12-05 13:42:51.957676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.615 [2024-12-05 13:42:51.957725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.615 [2024-12-05 13:42:51.957730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.552 Running I/O for 1 seconds... 00:11:21.552 lcore 0: 185264 00:11:21.552 lcore 1: 185259 00:11:21.552 lcore 2: 185261 00:11:21.552 lcore 3: 185262 00:11:21.552 done. 00:11:21.552 00:11:21.552 real 0m1.252s 00:11:21.552 user 0m4.134s 00:11:21.552 sys 0m0.111s 00:11:21.552 13:42:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.552 13:42:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 ************************************ 00:11:21.552 END TEST event_perf 00:11:21.552 ************************************ 00:11:21.552 13:42:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:11:21.552 13:42:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.552 13:42:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.552 13:42:53 event -- common/autotest_common.sh@10 -- # set +x 00:11:21.811 ************************************ 00:11:21.811 START TEST event_reactor 00:11:21.811 ************************************ 00:11:21.811 13:42:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:11:21.811 [2024-12-05 13:42:53.116930] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
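The per-lcore counters above are event_perf's result for a one-second run on four cores; the invocation appears a few lines earlier and uses two options, -m for the reactor core mask and -t for the run time in seconds, so a standalone run from an SPDK build tree would look roughly like:

  ./test/event/event_perf/event_perf -m 0xF -t 1    # four reactors, 1-second measurement, as used here

The Starting SPDK banner that follows belongs to the next test, event_reactor, which runs on a single core and prints the oneshot/tick trace shown next.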
00:11:21.811 [2024-12-05 13:42:53.117008] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840525 ] 00:11:21.811 [2024-12-05 13:42:53.235101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.811 [2024-12-05 13:42:53.289347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.223 test_start 00:11:23.223 oneshot 00:11:23.223 tick 100 00:11:23.223 tick 100 00:11:23.223 tick 250 00:11:23.223 tick 100 00:11:23.223 tick 100 00:11:23.223 tick 100 00:11:23.223 tick 250 00:11:23.223 tick 500 00:11:23.223 tick 100 00:11:23.223 tick 100 00:11:23.223 tick 250 00:11:23.223 tick 100 00:11:23.223 tick 100 00:11:23.223 test_end 00:11:23.223 00:11:23.223 real 0m1.246s 00:11:23.223 user 0m1.131s 00:11:23.223 sys 0m0.108s 00:11:23.223 13:42:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.223 13:42:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:23.223 ************************************ 00:11:23.223 END TEST event_reactor 00:11:23.223 ************************************ 00:11:23.223 13:42:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:23.223 13:42:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.223 13:42:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.223 13:42:54 event -- common/autotest_common.sh@10 -- # set +x 00:11:23.223 ************************************ 00:11:23.223 START TEST event_reactor_perf 00:11:23.223 ************************************ 00:11:23.223 13:42:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:23.223 [2024-12-05 13:42:54.441029] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
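The test_start, oneshot, tick, and test_end lines above are the trace emitted by test/event/reactor, launched here as reactor -t 1 on one core (its invocation and EAL banner appear just above). The reactor_perf test that follows uses the same options but reports a single throughput figure instead of a trace; run by hand the two would look roughly like:

  ./test/event/reactor/reactor -t 1                 # prints the test_start / tick / test_end trace above
  ./test/event/reactor_perf/reactor_perf -t 1       # prints 'Performance: N events per second', as seen below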
00:11:23.223 [2024-12-05 13:42:54.441087] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840718 ] 00:11:23.223 [2024-12-05 13:42:54.550901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.223 [2024-12-05 13:42:54.607033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.158 test_start 00:11:24.158 test_end 00:11:24.158 Performance: 327072 events per second 00:11:24.158 00:11:24.158 real 0m1.240s 00:11:24.158 user 0m1.133s 00:11:24.158 sys 0m0.100s 00:11:24.158 13:42:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.158 13:42:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:24.158 ************************************ 00:11:24.158 END TEST event_reactor_perf 00:11:24.158 ************************************ 00:11:24.418 13:42:55 event -- event/event.sh@49 -- # uname -s 00:11:24.418 13:42:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:24.418 13:42:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:11:24.418 13:42:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.418 13:42:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.418 13:42:55 event -- common/autotest_common.sh@10 -- # set +x 00:11:24.418 ************************************ 00:11:24.418 START TEST event_scheduler 00:11:24.418 ************************************ 00:11:24.418 13:42:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:11:24.418 * Looking for test storage... 
00:11:24.418 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler 00:11:24.418 13:42:55 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:24.418 13:42:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:11:24.418 13:42:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:24.418 13:42:55 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.418 13:42:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.419 13:42:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:24.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.419 --rc genhtml_branch_coverage=1 00:11:24.419 --rc genhtml_function_coverage=1 00:11:24.419 --rc genhtml_legend=1 00:11:24.419 --rc geninfo_all_blocks=1 00:11:24.419 --rc geninfo_unexecuted_blocks=1 00:11:24.419 00:11:24.419 ' 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:24.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.419 --rc genhtml_branch_coverage=1 00:11:24.419 --rc genhtml_function_coverage=1 00:11:24.419 --rc genhtml_legend=1 00:11:24.419 --rc geninfo_all_blocks=1 00:11:24.419 --rc geninfo_unexecuted_blocks=1 00:11:24.419 00:11:24.419 ' 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:24.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.419 --rc genhtml_branch_coverage=1 00:11:24.419 --rc genhtml_function_coverage=1 00:11:24.419 --rc genhtml_legend=1 00:11:24.419 --rc geninfo_all_blocks=1 00:11:24.419 --rc geninfo_unexecuted_blocks=1 00:11:24.419 00:11:24.419 ' 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:24.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.419 --rc genhtml_branch_coverage=1 00:11:24.419 --rc genhtml_function_coverage=1 00:11:24.419 --rc genhtml_legend=1 00:11:24.419 --rc geninfo_all_blocks=1 00:11:24.419 --rc geninfo_unexecuted_blocks=1 00:11:24.419 00:11:24.419 ' 00:11:24.419 13:42:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:24.419 13:42:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3840955 00:11:24.419 13:42:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:24.419 13:42:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:24.419 13:42:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3840955 
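Unlike the earlier targets, the scheduler test app is started with --wait-for-rpc, so it comes up with the framework paused and is then configured entirely over the RPC socket: the trace that follows selects the dynamic scheduler, finishes framework initialization, and creates pinned test threads through an RPC plugin. rpc_cmd in these traces is the test framework's wrapper around scripts/rpc.py; issued directly, and assuming the test's scheduler_plugin module is on the Python path, the equivalent calls would be roughly:

  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

The -m mask pins the created thread to a core, and -a appears to set its requested activity percentage, matching the scheduler_thread_create calls visible below for masks 0x1 through 0x8.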
00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3840955 ']' 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.419 13:42:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:24.677 [2024-12-05 13:42:55.968931] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:24.677 [2024-12-05 13:42:55.968990] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840955 ] 00:11:24.677 [2024-12-05 13:42:56.048432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.677 [2024-12-05 13:42:56.096684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.677 [2024-12-05 13:42:56.096767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.677 [2024-12-05 13:42:56.096856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.677 [2024-12-05 13:42:56.096859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:11:24.935 13:42:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:24.935 [2024-12-05 13:42:56.209636] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:11:24.935 [2024-12-05 13:42:56.209658] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:24.935 [2024-12-05 13:42:56.209669] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:24.935 [2024-12-05 13:42:56.209677] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:24.935 [2024-12-05 13:42:56.209685] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.935 13:42:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:24.935 [2024-12-05 13:42:56.393815] 'OCF_Core' volume operations registered 00:11:24.935 [2024-12-05 13:42:56.393855] 'OCF_Cache' volume operations registered 00:11:24.935 [2024-12-05 13:42:56.397553] 'OCF Composite' volume operations registered 00:11:24.935 [2024-12-05 13:42:56.401275] 
'SPDK_block_device' volume operations registered 00:11:24.935 [2024-12-05 13:42:56.402256] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.935 13:42:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.935 13:42:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:24.935 ************************************ 00:11:24.935 START TEST scheduler_create_thread 00:11:24.935 ************************************ 00:11:24.935 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:11:24.935 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:24.935 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.935 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 2 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 3 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 4 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 5 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 6 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 7 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 8 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 9 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 10 00:11:25.194 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.195 13:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:25.763 13:42:57 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.763 13:42:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:25.763 13:42:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.763 13:42:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 13:42:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.141 13:42:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:27.141 13:42:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:27.141 13:42:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.141 13:42:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:28.079 13:42:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.079 00:11:28.079 real 0m3.100s 00:11:28.079 user 0m0.026s 00:11:28.079 sys 0m0.005s 00:11:28.079 13:42:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.080 13:42:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:28.080 ************************************ 00:11:28.080 END TEST scheduler_create_thread 00:11:28.080 ************************************ 00:11:28.080 13:42:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:28.080 13:42:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3840955 00:11:28.080 13:42:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3840955 ']' 00:11:28.080 13:42:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3840955 00:11:28.080 13:42:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:11:28.080 13:42:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.080 13:42:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840955 00:11:28.339 13:42:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:28.339 13:42:59 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:28.339 13:42:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840955' 00:11:28.339 killing process with pid 3840955 00:11:28.339 13:42:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3840955 00:11:28.339 13:42:59 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3840955 00:11:28.599 [2024-12-05 13:42:59.926170] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:11:28.860 00:11:28.860 real 0m4.535s 00:11:28.860 user 0m7.581s 00:11:28.860 sys 0m0.534s 00:11:28.860 13:43:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.860 13:43:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:28.860 ************************************ 00:11:28.860 END TEST event_scheduler 00:11:28.860 ************************************ 00:11:28.860 13:43:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:28.860 13:43:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:28.860 13:43:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.860 13:43:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.860 13:43:00 event -- common/autotest_common.sh@10 -- # set +x 00:11:28.860 ************************************ 00:11:28.860 START TEST app_repeat 00:11:28.860 ************************************ 00:11:28.860 13:43:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3841693 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3841693' 00:11:28.860 Process app_repeat pid: 3841693 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:28.860 spdk_app_start Round 0 00:11:28.860 13:43:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3841693 /var/tmp/spdk-nbd.sock 00:11:28.860 13:43:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3841693 ']' 00:11:28.860 13:43:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:28.860 13:43:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.860 13:43:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:28.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:28.860 13:43:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.860 13:43:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:29.119 [2024-12-05 13:43:00.398688] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:29.119 [2024-12-05 13:43:00.398754] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841693 ] 00:11:29.119 [2024-12-05 13:43:00.506493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:29.119 [2024-12-05 13:43:00.561431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.119 [2024-12-05 13:43:00.561437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.378 13:43:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.378 13:43:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:29.378 13:43:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:29.637 Malloc0 00:11:29.637 13:43:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:29.896 Malloc1 00:11:29.896 13:43:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.896 13:43:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:30.155 /dev/nbd0 00:11:30.155 13:43:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:30.155 13:43:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:30.155 1+0 records in 00:11:30.155 1+0 records out 00:11:30.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266881 s, 15.3 MB/s 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:30.155 13:43:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:30.155 13:43:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.155 13:43:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:30.155 13:43:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:30.415 /dev/nbd1 00:11:30.415 13:43:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:30.415 13:43:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:30.415 1+0 records in 00:11:30.415 1+0 records out 00:11:30.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250349 s, 16.4 MB/s 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:30.415 13:43:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:30.415 13:43:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.415 13:43:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:30.415 13:43:01 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.415 13:43:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.415 13:43:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:30.675 { 00:11:30.675 "nbd_device": "/dev/nbd0", 00:11:30.675 "bdev_name": "Malloc0" 00:11:30.675 }, 00:11:30.675 { 00:11:30.675 "nbd_device": "/dev/nbd1", 00:11:30.675 "bdev_name": "Malloc1" 00:11:30.675 } 00:11:30.675 ]' 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:30.675 { 00:11:30.675 "nbd_device": "/dev/nbd0", 00:11:30.675 "bdev_name": "Malloc0" 00:11:30.675 }, 00:11:30.675 { 00:11:30.675 "nbd_device": "/dev/nbd1", 00:11:30.675 "bdev_name": "Malloc1" 00:11:30.675 } 00:11:30.675 ]' 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:30.675 /dev/nbd1' 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:30.675 /dev/nbd1' 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:30.675 256+0 records in 00:11:30.675 256+0 records out 00:11:30.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113136 s, 92.7 MB/s 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:30.675 256+0 records in 00:11:30.675 256+0 records out 00:11:30.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200372 s, 52.3 MB/s 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.675 13:43:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:30.934 256+0 records in 00:11:30.934 256+0 records out 00:11:30.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296012 s, 35.4 MB/s 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:30.934 13:43:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.193 13:43:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.452 13:43:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:31.452 13:43:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:31.452 13:43:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:31.711 13:43:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:31.711 13:43:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:31.711 13:43:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.711 13:43:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:31.711 13:43:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:31.711 13:43:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:31.711 13:43:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:31.711 13:43:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:31.711 13:43:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:31.711 13:43:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:31.971 13:43:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:32.231 [2024-12-05 13:43:03.508643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:32.231 [2024-12-05 13:43:03.562820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.231 [2024-12-05 13:43:03.562826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.231 [2024-12-05 13:43:03.609208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:32.231 [2024-12-05 13:43:03.609259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:35.522 13:43:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:35.522 13:43:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:35.522 spdk_app_start Round 1 00:11:35.522 13:43:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3841693 /var/tmp/spdk-nbd.sock 00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3841693 ']' 00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:35.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.522 13:43:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:35.522 13:43:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:35.522 Malloc0 00:11:35.522 13:43:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:35.781 Malloc1 00:11:35.781 13:43:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.781 13:43:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:35.781 /dev/nbd0 00:11:36.041 13:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:36.041 13:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:11:36.041 1+0 records in 00:11:36.041 1+0 records out 00:11:36.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247915 s, 16.5 MB/s 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:36.041 13:43:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:36.041 13:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.041 13:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:36.041 13:43:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:36.300 /dev/nbd1 00:11:36.300 13:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:36.300 13:43:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:36.300 1+0 records in 00:11:36.300 1+0 records out 00:11:36.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264109 s, 15.5 MB/s 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:36.300 13:43:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:36.300 13:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.300 13:43:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:36.300 13:43:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.300 13:43:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.300 13:43:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:36.560 { 00:11:36.560 
"nbd_device": "/dev/nbd0", 00:11:36.560 "bdev_name": "Malloc0" 00:11:36.560 }, 00:11:36.560 { 00:11:36.560 "nbd_device": "/dev/nbd1", 00:11:36.560 "bdev_name": "Malloc1" 00:11:36.560 } 00:11:36.560 ]' 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:36.560 { 00:11:36.560 "nbd_device": "/dev/nbd0", 00:11:36.560 "bdev_name": "Malloc0" 00:11:36.560 }, 00:11:36.560 { 00:11:36.560 "nbd_device": "/dev/nbd1", 00:11:36.560 "bdev_name": "Malloc1" 00:11:36.560 } 00:11:36.560 ]' 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:36.560 /dev/nbd1' 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:36.560 /dev/nbd1' 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:36.560 13:43:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:36.560 256+0 records in 00:11:36.560 256+0 records out 00:11:36.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110744 s, 94.7 MB/s 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:36.560 256+0 records in 00:11:36.560 256+0 records out 00:11:36.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182107 s, 57.6 MB/s 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:36.560 256+0 records in 00:11:36.560 256+0 records out 00:11:36.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227126 s, 46.2 MB/s 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.560 13:43:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.820 13:43:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:37.079 13:43:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.080 13:43:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:37.080 13:43:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.080 13:43:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:37.339 13:43:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:37.339 13:43:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:37.908 13:43:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:37.908 [2024-12-05 13:43:09.353580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:37.908 [2024-12-05 13:43:09.407532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.908 [2024-12-05 13:43:09.407537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.167 [2024-12-05 13:43:09.455534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:38.167 [2024-12-05 13:43:09.455585] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:40.698 13:43:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:40.698 13:43:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:40.698 spdk_app_start Round 2 00:11:40.698 13:43:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3841693 /var/tmp/spdk-nbd.sock 00:11:40.698 13:43:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3841693 ']' 00:11:40.698 13:43:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:40.698 13:43:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.698 13:43:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:40.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:40.698 13:43:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.698 13:43:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:40.956 13:43:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.956 13:43:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:40.956 13:43:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:41.214 Malloc0 00:11:41.214 13:43:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:41.472 Malloc1 00:11:41.472 13:43:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:41.472 13:43:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:42.040 /dev/nbd0 00:11:42.040 13:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.040 13:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:11:42.040 1+0 records in 00:11:42.040 1+0 records out 00:11:42.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156977 s, 26.1 MB/s 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:42.040 13:43:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:42.040 13:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.040 13:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:42.040 13:43:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:42.040 /dev/nbd1 00:11:42.040 13:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:42.040 13:43:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:42.041 1+0 records in 00:11:42.041 1+0 records out 00:11:42.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185163 s, 22.1 MB/s 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:42.041 13:43:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:42.041 13:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.041 13:43:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:42.041 13:43:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:42.041 13:43:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.041 13:43:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:42.300 13:43:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:42.300 { 00:11:42.300 
"nbd_device": "/dev/nbd0", 00:11:42.300 "bdev_name": "Malloc0" 00:11:42.300 }, 00:11:42.300 { 00:11:42.300 "nbd_device": "/dev/nbd1", 00:11:42.300 "bdev_name": "Malloc1" 00:11:42.300 } 00:11:42.300 ]' 00:11:42.559 13:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:42.559 { 00:11:42.559 "nbd_device": "/dev/nbd0", 00:11:42.559 "bdev_name": "Malloc0" 00:11:42.559 }, 00:11:42.559 { 00:11:42.559 "nbd_device": "/dev/nbd1", 00:11:42.559 "bdev_name": "Malloc1" 00:11:42.559 } 00:11:42.559 ]' 00:11:42.559 13:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:42.559 13:43:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:42.559 /dev/nbd1' 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:42.560 /dev/nbd1' 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:42.560 256+0 records in 00:11:42.560 256+0 records out 00:11:42.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104563 s, 100 MB/s 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:42.560 256+0 records in 00:11:42.560 256+0 records out 00:11:42.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271194 s, 38.7 MB/s 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:42.560 256+0 records in 00:11:42.560 256+0 records out 00:11:42.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267594 s, 39.2 MB/s 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.560 13:43:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.819 13:43:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:43.078 13:43:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:43.648 13:43:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:43.648 13:43:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:43.907 13:43:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:43.907 [2024-12-05 13:43:15.391581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:44.166 [2024-12-05 13:43:15.445415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.166 [2024-12-05 13:43:15.445422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.166 [2024-12-05 13:43:15.496573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:44.166 [2024-12-05 13:43:15.496636] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:46.701 13:43:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3841693 /var/tmp/spdk-nbd.sock 00:11:46.701 13:43:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3841693 ']' 00:11:46.701 13:43:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:46.701 13:43:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.701 13:43:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:46.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
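The nbd round-trip recorded above follows a fixed pattern: two malloc bdevs are created over the app's RPC socket (the 64 and 4096 arguments are size in MB and block size), exported as /dev/nbd0 and /dev/nbd1, filled from a random temp file, compared back with cmp, then torn down until nbd_get_disks returns an empty list and the app is killed over RPC. A minimal sketch of that sequence, assuming rpc.py is run from an SPDK checkout and using the same socket and sizes as this run (temp-file path shortened here):

  sock=/var/tmp/spdk-nbd.sock
  rpc="scripts/rpc.py -s $sock"
  $rpc bdev_malloc_create 64 4096                  # -> Malloc0
  $rpc bdev_malloc_create 64 4096                  # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $d             # verify the data came back intact
  done
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # expect no output
  $rpc spdk_kill_instance SIGTERM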
00:11:46.701 13:43:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.701 13:43:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:47.268 13:43:18 event.app_repeat -- event/event.sh@39 -- # killprocess 3841693 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3841693 ']' 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3841693 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841693 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841693' 00:11:47.268 killing process with pid 3841693 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3841693 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3841693 00:11:47.268 spdk_app_start is called in Round 0. 00:11:47.268 Shutdown signal received, stop current app iteration 00:11:47.268 Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 reinitialization... 00:11:47.268 spdk_app_start is called in Round 1. 00:11:47.268 Shutdown signal received, stop current app iteration 00:11:47.268 Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 reinitialization... 00:11:47.268 spdk_app_start is called in Round 2. 00:11:47.268 Shutdown signal received, stop current app iteration 00:11:47.268 Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 reinitialization... 00:11:47.268 spdk_app_start is called in Round 3. 
00:11:47.268 Shutdown signal received, stop current app iteration 00:11:47.268 13:43:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:47.268 13:43:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:47.268 00:11:47.268 real 0m18.366s 00:11:47.268 user 0m40.526s 00:11:47.268 sys 0m3.640s 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.268 13:43:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:47.268 ************************************ 00:11:47.268 END TEST app_repeat 00:11:47.268 ************************************ 00:11:47.268 13:43:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:47.268 13:43:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:47.269 13:43:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.269 13:43:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.269 13:43:18 event -- common/autotest_common.sh@10 -- # set +x 00:11:47.527 ************************************ 00:11:47.527 START TEST cpu_locks 00:11:47.527 ************************************ 00:11:47.527 13:43:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:47.527 * Looking for test storage... 00:11:47.527 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event 00:11:47.527 13:43:18 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.527 13:43:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.527 13:43:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.527 13:43:18 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.527 13:43:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.527 13:43:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.527 13:43:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.527 13:43:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.528 13:43:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:47.528 13:43:18 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.528 13:43:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.528 --rc genhtml_branch_coverage=1 00:11:47.528 --rc genhtml_function_coverage=1 00:11:47.528 --rc genhtml_legend=1 00:11:47.528 --rc geninfo_all_blocks=1 00:11:47.528 --rc geninfo_unexecuted_blocks=1 00:11:47.528 00:11:47.528 ' 00:11:47.528 13:43:18 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.528 --rc genhtml_branch_coverage=1 00:11:47.528 --rc genhtml_function_coverage=1 00:11:47.528 --rc genhtml_legend=1 00:11:47.528 --rc geninfo_all_blocks=1 00:11:47.528 --rc geninfo_unexecuted_blocks=1 00:11:47.528 00:11:47.528 ' 00:11:47.528 13:43:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.528 --rc genhtml_branch_coverage=1 00:11:47.528 --rc genhtml_function_coverage=1 00:11:47.528 --rc genhtml_legend=1 00:11:47.528 --rc geninfo_all_blocks=1 00:11:47.528 --rc geninfo_unexecuted_blocks=1 00:11:47.528 00:11:47.528 ' 00:11:47.528 13:43:18 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.528 --rc genhtml_branch_coverage=1 00:11:47.528 --rc genhtml_function_coverage=1 00:11:47.528 --rc genhtml_legend=1 00:11:47.528 --rc geninfo_all_blocks=1 00:11:47.528 --rc geninfo_unexecuted_blocks=1 00:11:47.528 00:11:47.528 ' 00:11:47.528 13:43:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:47.528 13:43:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:47.528 13:43:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:47.528 13:43:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:47.528 13:43:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.528 13:43:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.528 13:43:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:47.528 ************************************ 
00:11:47.528 START TEST default_locks 00:11:47.528 ************************************ 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3844328 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3844328 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3844328 ']' 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.528 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:47.790 [2024-12-05 13:43:19.084951] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:47.790 [2024-12-05 13:43:19.085012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844328 ] 00:11:47.790 [2024-12-05 13:43:19.192809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.790 [2024-12-05 13:43:19.253034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.049 [2024-12-05 13:43:19.477927] 'OCF_Core' volume operations registered 00:11:48.049 [2024-12-05 13:43:19.477966] 'OCF_Cache' volume operations registered 00:11:48.049 [2024-12-05 13:43:19.482398] 'OCF Composite' volume operations registered 00:11:48.049 [2024-12-05 13:43:19.486872] 'SPDK_block_device' volume operations registered 00:11:48.617 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.617 13:43:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:48.617 13:43:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3844328 00:11:48.617 13:43:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3844328 00:11:48.617 13:43:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:49.578 lslocks: write error 00:11:49.578 13:43:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3844328 00:11:49.578 13:43:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3844328 ']' 00:11:49.578 13:43:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3844328 00:11:49.578 13:43:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:49.578 13:43:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.578 13:43:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844328 
00:11:49.578 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.578 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.578 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844328' 00:11:49.578 killing process with pid 3844328 00:11:49.578 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3844328 00:11:49.578 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3844328 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3844328 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3844328 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3844328 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3844328 ']' 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.146 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3844328) - No such process 00:11:50.146 ERROR: process (pid: 3844328) is no longer running 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:50.146 00:11:50.146 real 0m2.501s 00:11:50.146 user 0m2.572s 00:11:50.146 sys 0m1.057s 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.146 13:43:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.146 ************************************ 00:11:50.146 END TEST default_locks 00:11:50.146 ************************************ 00:11:50.146 13:43:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:50.146 13:43:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:50.146 13:43:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.146 13:43:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:50.146 ************************************ 00:11:50.146 START TEST default_locks_via_rpc 00:11:50.146 ************************************ 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3844666 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3844666 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3844666 ']' 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
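The locks_exist check that produced the "lslocks: write error" line above boils down to a one-liner: list the file locks held by the target PID and look for the spdk_cpu_lock prefix. The write error is expected noise, presumably because grep -q exits on the first match and closes the pipe under lslocks. A sketch, with $pid standing for the spdk_tgt PID of the run:

  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "per-core lock held by $pid"
  fi

default_locks then kills that PID and re-runs waitforlisten against it, so the "No such process" / return 1 path just before END TEST is the expected outcome rather than a failure.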
00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.146 13:43:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.405 [2024-12-05 13:43:21.694781] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:50.405 [2024-12-05 13:43:21.694853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844666 ] 00:11:50.405 [2024-12-05 13:43:21.815043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.405 [2024-12-05 13:43:21.871837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.664 [2024-12-05 13:43:22.097045] 'OCF_Core' volume operations registered 00:11:50.664 [2024-12-05 13:43:22.097083] 'OCF_Cache' volume operations registered 00:11:50.664 [2024-12-05 13:43:22.101489] 'OCF Composite' volume operations registered 00:11:50.664 [2024-12-05 13:43:22.105959] 'SPDK_block_device' volume operations registered 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:50.923 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3844666 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3844666 00:11:50.924 13:43:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3844666 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3844666 ']' 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3844666 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:51.861 13:43:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844666 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844666' 00:11:51.861 killing process with pid 3844666 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3844666 00:11:51.861 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3844666 00:11:52.433 00:11:52.433 real 0m2.223s 00:11:52.433 user 0m2.099s 00:11:52.433 sys 0m1.101s 00:11:52.433 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.433 13:43:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.433 ************************************ 00:11:52.433 END TEST default_locks_via_rpc 00:11:52.433 ************************************ 00:11:52.433 13:43:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:52.433 13:43:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.433 13:43:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.433 13:43:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:52.433 ************************************ 00:11:52.433 START TEST non_locking_app_on_locked_coremask 00:11:52.433 ************************************ 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3844984 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3844984 /var/tmp/spdk.sock 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3844984 ']' 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
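The default_locks_via_rpc run that just ended differs from default_locks only in how the locks are toggled: they are dropped at runtime with one RPC, the absence of spdk_cpu_lock entries is checked, then they are re-acquired and checked again before the target is killed. Roughly, against the default /var/tmp/spdk.sock socket used here ($pid again standing for the target's PID):

  rpc=scripts/rpc.py
  $rpc framework_disable_cpumask_locks        # release the per-core lock files
  # ...confirm lslocks no longer reports spdk_cpu_lock for $pid...
  $rpc framework_enable_cpumask_locks         # take them again
  lslocks -p "$pid" | grep -q spdk_cpu_lock   # lock is back before teardown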
00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.433 13:43:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:52.734 [2024-12-05 13:43:23.995365] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:52.734 [2024-12-05 13:43:23.995446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844984 ] 00:11:52.734 [2024-12-05 13:43:24.117070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.734 [2024-12-05 13:43:24.172471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.032 [2024-12-05 13:43:24.402192] 'OCF_Core' volume operations registered 00:11:53.032 [2024-12-05 13:43:24.402230] 'OCF_Cache' volume operations registered 00:11:53.032 [2024-12-05 13:43:24.406688] 'OCF Composite' volume operations registered 00:11:53.032 [2024-12-05 13:43:24.411162] 'SPDK_block_device' volume operations registered 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3845164 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3845164 /var/tmp/spdk2.sock 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3845164 ']' 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:53.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.600 13:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:53.600 [2024-12-05 13:43:24.947166] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:11:53.600 [2024-12-05 13:43:24.947245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845164 ] 00:11:53.600 [2024-12-05 13:43:25.118489] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:53.600 [2024-12-05 13:43:25.118532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.859 [2024-12-05 13:43:25.237244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.425 [2024-12-05 13:43:25.666068] 'OCF_Core' volume operations registered 00:11:54.425 [2024-12-05 13:43:25.666101] 'OCF_Cache' volume operations registered 00:11:54.425 [2024-12-05 13:43:25.674596] 'OCF Composite' volume operations registered 00:11:54.425 [2024-12-05 13:43:25.683102] 'SPDK_block_device' volume operations registered 00:11:54.684 13:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.684 13:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:54.684 13:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3844984 00:11:54.684 13:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3844984 00:11:54.684 13:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:57.216 lslocks: write error 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3844984 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3844984 ']' 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3844984 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844984 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844984' 00:11:57.216 killing process with pid 3844984 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3844984 00:11:57.216 13:43:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3844984 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3845164 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3845164 ']' 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3845164 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3845164 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3845164' 00:11:58.150 killing process with pid 3845164 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3845164 00:11:58.150 13:43:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3845164 00:11:59.085 00:11:59.085 real 0m6.311s 00:11:59.085 user 0m6.593s 00:11:59.085 sys 0m2.452s 00:11:59.085 13:43:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.085 13:43:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:59.085 ************************************ 00:11:59.085 END TEST non_locking_app_on_locked_coremask 00:11:59.085 ************************************ 00:11:59.085 13:43:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:59.085 13:43:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:59.085 13:43:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.085 13:43:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:59.085 ************************************ 00:11:59.085 START TEST locking_app_on_unlocked_coremask 00:11:59.085 ************************************ 00:11:59.085 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:59.085 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3845897 00:11:59.085 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:59.085 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3845897 /var/tmp/spdk.sock 00:11:59.085 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3845897 ']' 00:11:59.085 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.085 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.086 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.086 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.086 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:59.086 [2024-12-05 13:43:30.389867] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
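The timings just above close out non_locking_app_on_locked_coremask, the coexistence case: the first spdk_tgt takes the core-0 lock, and a second instance is still allowed to come up on the same mask because it is started with --disable-cpumask-locks and its own RPC socket. The run now starting, locking_app_on_unlocked_coremask, swaps the roles — the first instance carries the flag and the second locks normally. In outline for the first case (binary path relative to the SPDK build):

  build/bin/spdk_tgt -m 0x1 &                                                  # claims the core-0 lock
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts anyway, no lock taken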
00:11:59.086 [2024-12-05 13:43:30.389941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845897 ] 00:11:59.086 [2024-12-05 13:43:30.514391] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:59.086 [2024-12-05 13:43:30.514432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.086 [2024-12-05 13:43:30.569707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.344 [2024-12-05 13:43:30.773613] 'OCF_Core' volume operations registered 00:11:59.344 [2024-12-05 13:43:30.773656] 'OCF_Cache' volume operations registered 00:11:59.344 [2024-12-05 13:43:30.777665] 'OCF Composite' volume operations registered 00:11:59.344 [2024-12-05 13:43:30.781694] 'SPDK_block_device' volume operations registered 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3845911 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3845911 /var/tmp/spdk2.sock 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3845911 ']' 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:59.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.603 13:43:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:59.603 [2024-12-05 13:43:30.990351] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:11:59.603 [2024-12-05 13:43:30.990426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845911 ] 00:11:59.862 [2024-12-05 13:43:31.162201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.862 [2024-12-05 13:43:31.272982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.430 [2024-12-05 13:43:31.694248] 'OCF_Core' volume operations registered 00:12:00.430 [2024-12-05 13:43:31.694289] 'OCF_Cache' volume operations registered 00:12:00.430 [2024-12-05 13:43:31.702729] 'OCF Composite' volume operations registered 00:12:00.430 [2024-12-05 13:43:31.711192] 'SPDK_block_device' volume operations registered 00:12:00.689 13:43:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.689 13:43:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:00.689 13:43:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3845911 00:12:00.689 13:43:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3845911 00:12:00.689 13:43:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:03.224 lslocks: write error 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3845897 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3845897 ']' 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3845897 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3845897 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3845897' 00:12:03.224 killing process with pid 3845897 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3845897 00:12:03.224 13:43:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3845897 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3845911 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3845911 ']' 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3845911 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3845911 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3845911' 00:12:04.164 killing process with pid 3845911 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3845911 00:12:04.164 13:43:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3845911 00:12:04.732 00:12:04.732 real 0m5.756s 00:12:04.732 user 0m5.897s 00:12:04.732 sys 0m2.346s 00:12:04.732 13:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.732 13:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:04.732 ************************************ 00:12:04.732 END TEST locking_app_on_unlocked_coremask 00:12:04.732 ************************************ 00:12:04.733 13:43:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:04.733 13:43:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.733 13:43:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.733 13:43:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:04.733 ************************************ 00:12:04.733 START TEST locking_app_on_locked_coremask 00:12:04.733 ************************************ 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3846645 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3846645 /var/tmp/spdk.sock 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3846645 ']' 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.733 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:04.733 [2024-12-05 13:43:36.226805] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:04.733 [2024-12-05 13:43:36.226877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846645 ] 00:12:04.992 [2024-12-05 13:43:36.350219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.992 [2024-12-05 13:43:36.406128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.251 [2024-12-05 13:43:36.621378] 'OCF_Core' volume operations registered 00:12:05.251 [2024-12-05 13:43:36.621418] 'OCF_Cache' volume operations registered 00:12:05.251 [2024-12-05 13:43:36.625814] 'OCF Composite' volume operations registered 00:12:05.251 [2024-12-05 13:43:36.630244] 'SPDK_block_device' volume operations registered 00:12:05.509 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.509 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:05.509 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3846814 00:12:05.509 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3846814 /var/tmp/spdk2.sock 00:12:05.509 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:05.509 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:05.509 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3846814 /var/tmp/spdk2.sock 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3846814 /var/tmp/spdk2.sock 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3846814 ']' 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:05.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.510 13:43:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:05.510 [2024-12-05 13:43:36.848381] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:05.510 [2024-12-05 13:43:36.848453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846814 ] 00:12:05.510 [2024-12-05 13:43:37.022761] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3846645 has claimed it. 00:12:05.510 [2024-12-05 13:43:37.022821] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:06.077 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3846814) - No such process 00:12:06.077 ERROR: process (pid: 3846814) is no longer running 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3846645 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:06.077 13:43:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3846645 00:12:07.455 lslocks: write error 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3846645 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3846645 ']' 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3846645 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3846645 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3846645' 00:12:07.455 killing process with pid 3846645 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # 
kill 3846645 00:12:07.455 13:43:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3846645 00:12:08.023 00:12:08.023 real 0m3.103s 00:12:08.023 user 0m3.277s 00:12:08.023 sys 0m1.318s 00:12:08.023 13:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.023 13:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:08.023 ************************************ 00:12:08.023 END TEST locking_app_on_locked_coremask 00:12:08.023 ************************************ 00:12:08.023 13:43:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:08.023 13:43:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.023 13:43:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.023 13:43:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:08.023 ************************************ 00:12:08.023 START TEST locking_overlapped_coremask 00:12:08.023 ************************************ 00:12:08.023 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:12:08.023 13:43:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3847178 00:12:08.023 13:43:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3847178 /var/tmp/spdk.sock 00:12:08.023 13:43:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:12:08.023 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3847178 ']' 00:12:08.023 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.023 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.024 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.024 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.024 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:08.024 [2024-12-05 13:43:39.413691] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:08.024 [2024-12-05 13:43:39.413764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847178 ] 00:12:08.024 [2024-12-05 13:43:39.536069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.283 [2024-12-05 13:43:39.594504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.283 [2024-12-05 13:43:39.594594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.283 [2024-12-05 13:43:39.594598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.283 [2024-12-05 13:43:39.801242] 'OCF_Core' volume operations registered 00:12:08.283 [2024-12-05 13:43:39.801282] 'OCF_Cache' volume operations registered 00:12:08.543 [2024-12-05 13:43:39.805726] 'OCF Composite' volume operations registered 00:12:08.543 [2024-12-05 13:43:39.810202] 'SPDK_block_device' volume operations registered 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3847203 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3847203 /var/tmp/spdk2.sock 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3847203 /var/tmp/spdk2.sock 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3847203 /var/tmp/spdk2.sock 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3847203 ']' 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:08.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.543 13:43:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:08.543 [2024-12-05 13:43:40.029025] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:08.543 [2024-12-05 13:43:40.029104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847203 ] 00:12:08.803 [2024-12-05 13:43:40.167128] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3847178 has claimed it. 00:12:08.803 [2024-12-05 13:43:40.167176] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:09.372 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3847203) - No such process 00:12:09.372 ERROR: process (pid: 3847203) is no longer running 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3847178 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3847178 ']' 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3847178 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3847178 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
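check_remaining_locks above expands /var/tmp/spdk_cpu_lock_* and compares the result against the three files expected for mask 0x7 (cores 0-2); the earlier locked_coremask case inspected the same mechanism with lslocks -p <pid> | grep spdk_cpu_lock. A small sketch of both checks, assuming a target is still running and its pid is in $tgt_pid:

    # one lock file per claimed core; for -m 0x7 expect _000, _001 and _002
    ls /var/tmp/spdk_cpu_lock_*

    # confirm the running target actually holds locks on those files
    lslocks -p "$tgt_pid" | grep spdk_cpu_lock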
00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3847178' 00:12:09.372 killing process with pid 3847178 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3847178 00:12:09.372 13:43:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3847178 00:12:09.941 00:12:09.941 real 0m2.002s 00:12:09.941 user 0m5.407s 00:12:09.941 sys 0m0.650s 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:09.941 ************************************ 00:12:09.941 END TEST locking_overlapped_coremask 00:12:09.941 ************************************ 00:12:09.941 13:43:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:09.941 13:43:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.941 13:43:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.941 13:43:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:09.941 ************************************ 00:12:09.941 START TEST locking_overlapped_coremask_via_rpc 00:12:09.941 ************************************ 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3847417 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3847417 /var/tmp/spdk.sock 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3847417 ']' 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.941 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.942 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.942 13:43:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.942 [2024-12-05 13:43:41.458950] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
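Note the extra --disable-cpumask-locks flag on this spdk_tgt invocation: the target starts without claiming its cores ("CPU core locks deactivated" in the entries below), so two instances with overlapping masks can come up side by side, and the locks are only taken later through the framework_enable_cpumask_locks RPC. A launch sketch mirroring the two command lines in this run:

    # 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so both share core 2,
    # but neither locks anything at startup
    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &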
00:12:09.942 [2024-12-05 13:43:41.459002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847417 ] 00:12:10.201 [2024-12-05 13:43:41.566081] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:10.201 [2024-12-05 13:43:41.566121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:10.201 [2024-12-05 13:43:41.629024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.201 [2024-12-05 13:43:41.629112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.201 [2024-12-05 13:43:41.629117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.461 [2024-12-05 13:43:41.848337] 'OCF_Core' volume operations registered 00:12:10.461 [2024-12-05 13:43:41.848375] 'OCF_Cache' volume operations registered 00:12:10.461 [2024-12-05 13:43:41.852833] 'OCF Composite' volume operations registered 00:12:10.461 [2024-12-05 13:43:41.857312] 'SPDK_block_device' volume operations registered 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3847591 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3847591 /var/tmp/spdk2.sock 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3847591 ']' 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:11.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.030 13:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.030 [2024-12-05 13:43:42.439938] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:11.030 [2024-12-05 13:43:42.440011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847591 ] 00:12:11.289 [2024-12-05 13:43:42.572978] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:11.290 [2024-12-05 13:43:42.573012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.290 [2024-12-05 13:43:42.668810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.290 [2024-12-05 13:43:42.672659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:11.290 [2024-12-05 13:43:42.672660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.549 [2024-12-05 13:43:43.042671] 'OCF_Core' volume operations registered 00:12:11.549 [2024-12-05 13:43:43.042708] 'OCF_Cache' volume operations registered 00:12:11.549 [2024-12-05 13:43:43.050466] 'OCF Composite' volume operations registered 00:12:11.549 [2024-12-05 13:43:43.058263] 'SPDK_block_device' volume operations registered 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:12.117 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.118 [2024-12-05 13:43:43.537713] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3847417 has claimed it. 
00:12:12.118 request: 00:12:12.118 { 00:12:12.118 "method": "framework_enable_cpumask_locks", 00:12:12.118 "req_id": 1 00:12:12.118 } 00:12:12.118 Got JSON-RPC error response 00:12:12.118 response: 00:12:12.118 { 00:12:12.118 "code": -32603, 00:12:12.118 "message": "Failed to claim CPU core: 2" 00:12:12.118 } 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3847417 /var/tmp/spdk.sock 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3847417 ']' 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.118 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3847591 /var/tmp/spdk2.sock 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3847591 ']' 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:12.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
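The request/response pair above is the point of this test: framework_enable_cpumask_locks succeeds on the first target, which then owns cores 0-2, so the same call against the second target's socket returns JSON-RPC error -32603 ("Failed to claim CPU core: 2"). Roughly, with scripts/rpc.py:

    # first target (cores 0-2) takes its locks
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # second target (cores 2-4) now fails on the shared core 2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected failure: core 2 already locked"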
00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.376 13:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:12.635 00:12:12.635 real 0m2.707s 00:12:12.635 user 0m1.305s 00:12:12.635 sys 0m0.247s 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.635 13:43:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.635 ************************************ 00:12:12.635 END TEST locking_overlapped_coremask_via_rpc 00:12:12.635 ************************************ 00:12:12.894 13:43:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:12.894 13:43:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3847417 ]] 00:12:12.894 13:43:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3847417 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3847417 ']' 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3847417 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3847417 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3847417' 00:12:12.894 killing process with pid 3847417 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3847417 00:12:12.894 13:43:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3847417 00:12:13.462 13:43:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3847591 ]] 00:12:13.462 13:43:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3847591 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3847591 ']' 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3847591 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3847591 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3847591' 00:12:13.462 killing process with pid 3847591 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3847591 00:12:13.462 13:43:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3847591 00:12:14.029 13:43:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:14.029 13:43:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:14.029 13:43:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3847417 ]] 00:12:14.029 13:43:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3847417 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3847417 ']' 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3847417 00:12:14.029 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3847417) - No such process 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3847417 is not found' 00:12:14.029 Process with pid 3847417 is not found 00:12:14.029 13:43:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3847591 ]] 00:12:14.029 13:43:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3847591 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3847591 ']' 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3847591 00:12:14.029 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3847591) - No such process 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3847591 is not found' 00:12:14.029 Process with pid 3847591 is not found 00:12:14.029 13:43:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:14.029 00:12:14.029 real 0m26.643s 00:12:14.029 user 0m42.315s 00:12:14.029 sys 0m10.624s 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.029 13:43:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:14.029 ************************************ 00:12:14.029 END TEST cpu_locks 00:12:14.029 ************************************ 00:12:14.029 00:12:14.029 real 0m53.978s 00:12:14.029 user 1m37.142s 00:12:14.029 sys 0m15.550s 00:12:14.029 13:43:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.029 13:43:45 event -- common/autotest_common.sh@10 -- # set +x 00:12:14.029 ************************************ 00:12:14.029 END TEST event 00:12:14.029 ************************************ 00:12:14.029 13:43:45 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh 00:12:14.029 13:43:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.029 13:43:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.029 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:12:14.287 ************************************ 00:12:14.287 START TEST thread 00:12:14.287 ************************************ 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1129 -- 
# /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh 00:12:14.287 * Looking for test storage... 00:12:14.287 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.287 13:43:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.287 13:43:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.287 13:43:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.287 13:43:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.287 13:43:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.287 13:43:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.287 13:43:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.287 13:43:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.287 13:43:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.287 13:43:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.287 13:43:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.287 13:43:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:14.287 13:43:45 thread -- scripts/common.sh@345 -- # : 1 00:12:14.287 13:43:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.287 13:43:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.287 13:43:45 thread -- scripts/common.sh@365 -- # decimal 1 00:12:14.287 13:43:45 thread -- scripts/common.sh@353 -- # local d=1 00:12:14.287 13:43:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.287 13:43:45 thread -- scripts/common.sh@355 -- # echo 1 00:12:14.287 13:43:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.287 13:43:45 thread -- scripts/common.sh@366 -- # decimal 2 00:12:14.287 13:43:45 thread -- scripts/common.sh@353 -- # local d=2 00:12:14.287 13:43:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.287 13:43:45 thread -- scripts/common.sh@355 -- # echo 2 00:12:14.287 13:43:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.287 13:43:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.287 13:43:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.287 13:43:45 thread -- scripts/common.sh@368 -- # return 0 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.287 --rc genhtml_branch_coverage=1 00:12:14.287 --rc genhtml_function_coverage=1 00:12:14.287 --rc genhtml_legend=1 00:12:14.287 --rc geninfo_all_blocks=1 00:12:14.287 --rc geninfo_unexecuted_blocks=1 00:12:14.287 00:12:14.287 ' 00:12:14.287 13:43:45 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.287 --rc genhtml_branch_coverage=1 00:12:14.287 --rc genhtml_function_coverage=1 00:12:14.287 --rc genhtml_legend=1 00:12:14.287 --rc geninfo_all_blocks=1 00:12:14.287 --rc geninfo_unexecuted_blocks=1 00:12:14.287 00:12:14.287 ' 00:12:14.287 13:43:45 thread -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.287 --rc genhtml_branch_coverage=1 00:12:14.287 --rc genhtml_function_coverage=1 00:12:14.287 --rc genhtml_legend=1 00:12:14.287 --rc geninfo_all_blocks=1 00:12:14.287 --rc geninfo_unexecuted_blocks=1 00:12:14.287 00:12:14.287 ' 00:12:14.288 13:43:45 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.288 --rc genhtml_branch_coverage=1 00:12:14.288 --rc genhtml_function_coverage=1 00:12:14.288 --rc genhtml_legend=1 00:12:14.288 --rc geninfo_all_blocks=1 00:12:14.288 --rc geninfo_unexecuted_blocks=1 00:12:14.288 00:12:14.288 ' 00:12:14.288 13:43:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:14.288 13:43:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:14.288 13:43:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.288 13:43:45 thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.288 ************************************ 00:12:14.288 START TEST thread_poller_perf 00:12:14.288 ************************************ 00:12:14.288 13:43:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:14.288 [2024-12-05 13:43:45.807049] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:14.288 [2024-12-05 13:43:45.807094] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848056 ] 00:12:14.546 [2024-12-05 13:43:45.914340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.546 [2024-12-05 13:43:45.967841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.546 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:12:15.919 [2024-12-05T12:43:47.445Z] ====================================== 00:12:15.919 [2024-12-05T12:43:47.445Z] busy:2309208688 (cyc) 00:12:15.919 [2024-12-05T12:43:47.445Z] total_run_count: 266000 00:12:15.919 [2024-12-05T12:43:47.445Z] tsc_hz: 2300000000 (cyc) 00:12:15.919 [2024-12-05T12:43:47.445Z] ====================================== 00:12:15.919 [2024-12-05T12:43:47.445Z] poller_cost: 8681 (cyc), 3774 (nsec) 00:12:15.919 00:12:15.919 real 0m1.232s 00:12:15.919 user 0m1.131s 00:12:15.919 sys 0m0.095s 00:12:15.919 13:43:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.919 13:43:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:15.919 ************************************ 00:12:15.919 END TEST thread_poller_perf 00:12:15.919 ************************************ 00:12:15.919 13:43:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:15.919 13:43:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:15.919 13:43:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.919 13:43:47 thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.919 ************************************ 00:12:15.919 START TEST thread_poller_perf 00:12:15.919 ************************************ 00:12:15.919 13:43:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:15.919 [2024-12-05 13:43:47.134064] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:15.919 [2024-12-05 13:43:47.134138] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848250 ] 00:12:15.919 [2024-12-05 13:43:47.257981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.920 [2024-12-05 13:43:47.313464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.920 Running 1000 pollers for 1 seconds with 0 microseconds period. 
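The figures printed by poller_perf are related by simple arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond value follows from tsc_hz. For the 1-microsecond-period run just reported, 2309208688 / 266000 is about 8681 cycles per poller invocation, and 8681 / 2.3 GHz is about 3774 ns; the 0-microsecond run reported next works out the same way (2302809096 / 3504000 is about 657 cycles, or 285 ns). As a shell check over the first run's numbers:

    awk 'BEGIN { busy=2309208688; runs=266000; hz=2300000000;
                 cyc=busy/runs; printf "%.0f cyc  %.0f nsec\n", cyc, cyc/hz*1e9 }'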
00:12:16.854 [2024-12-05T12:43:48.380Z] ====================================== 00:12:16.854 [2024-12-05T12:43:48.380Z] busy:2302809096 (cyc) 00:12:16.854 [2024-12-05T12:43:48.380Z] total_run_count: 3504000 00:12:16.854 [2024-12-05T12:43:48.380Z] tsc_hz: 2300000000 (cyc) 00:12:16.854 [2024-12-05T12:43:48.380Z] ====================================== 00:12:16.854 [2024-12-05T12:43:48.380Z] poller_cost: 657 (cyc), 285 (nsec) 00:12:16.854 00:12:16.854 real 0m1.258s 00:12:16.854 user 0m1.133s 00:12:16.854 sys 0m0.119s 00:12:16.854 13:43:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.854 13:43:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:16.854 ************************************ 00:12:16.854 END TEST thread_poller_perf 00:12:16.854 ************************************ 00:12:17.113 13:43:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:17.113 00:12:17.113 real 0m2.845s 00:12:17.113 user 0m2.420s 00:12:17.113 sys 0m0.436s 00:12:17.113 13:43:48 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.113 13:43:48 thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.113 ************************************ 00:12:17.113 END TEST thread 00:12:17.113 ************************************ 00:12:17.113 13:43:48 -- spdk/autotest.sh@171 -- # [[ 1 -eq 1 ]] 00:12:17.113 13:43:48 -- spdk/autotest.sh@172 -- # run_test accel /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh 00:12:17.113 13:43:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:17.113 13:43:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.113 13:43:48 -- common/autotest_common.sh@10 -- # set +x 00:12:17.113 ************************************ 00:12:17.113 START TEST accel 00:12:17.113 ************************************ 00:12:17.113 13:43:48 accel -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh 00:12:17.113 * Looking for test storage... 00:12:17.113 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel 00:12:17.113 13:43:48 accel -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.113 13:43:48 accel -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.113 13:43:48 accel -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.411 13:43:48 accel -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.411 13:43:48 accel -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.411 13:43:48 accel -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.411 13:43:48 accel -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.411 13:43:48 accel -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.411 13:43:48 accel -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.411 13:43:48 accel -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.411 13:43:48 accel -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.411 13:43:48 accel -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.411 13:43:48 accel -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.411 13:43:48 accel -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.411 13:43:48 accel -- scripts/common.sh@344 -- # case "$op" in 00:12:17.411 13:43:48 accel -- scripts/common.sh@345 -- # : 1 00:12:17.411 13:43:48 accel -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.411 13:43:48 accel -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.411 13:43:48 accel -- scripts/common.sh@365 -- # decimal 1 00:12:17.411 13:43:48 accel -- scripts/common.sh@353 -- # local d=1 00:12:17.411 13:43:48 accel -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.411 13:43:48 accel -- scripts/common.sh@355 -- # echo 1 00:12:17.411 13:43:48 accel -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.411 13:43:48 accel -- scripts/common.sh@366 -- # decimal 2 00:12:17.411 13:43:48 accel -- scripts/common.sh@353 -- # local d=2 00:12:17.411 13:43:48 accel -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.411 13:43:48 accel -- scripts/common.sh@355 -- # echo 2 00:12:17.411 13:43:48 accel -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.411 13:43:48 accel -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.411 13:43:48 accel -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.411 13:43:48 accel -- scripts/common.sh@368 -- # return 0 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.411 --rc genhtml_branch_coverage=1 00:12:17.411 --rc genhtml_function_coverage=1 00:12:17.411 --rc genhtml_legend=1 00:12:17.411 --rc geninfo_all_blocks=1 00:12:17.411 --rc geninfo_unexecuted_blocks=1 00:12:17.411 00:12:17.411 ' 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.411 --rc genhtml_branch_coverage=1 00:12:17.411 --rc genhtml_function_coverage=1 00:12:17.411 --rc genhtml_legend=1 00:12:17.411 --rc geninfo_all_blocks=1 00:12:17.411 --rc geninfo_unexecuted_blocks=1 00:12:17.411 00:12:17.411 ' 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.411 --rc genhtml_branch_coverage=1 00:12:17.411 --rc genhtml_function_coverage=1 00:12:17.411 --rc genhtml_legend=1 00:12:17.411 --rc geninfo_all_blocks=1 00:12:17.411 --rc geninfo_unexecuted_blocks=1 00:12:17.411 00:12:17.411 ' 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.411 --rc genhtml_branch_coverage=1 00:12:17.411 --rc genhtml_function_coverage=1 00:12:17.411 --rc genhtml_legend=1 00:12:17.411 --rc geninfo_all_blocks=1 00:12:17.411 --rc geninfo_unexecuted_blocks=1 00:12:17.411 00:12:17.411 ' 00:12:17.411 13:43:48 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:17.411 13:43:48 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:12:17.411 13:43:48 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:17.411 13:43:48 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3848584 00:12:17.411 13:43:48 accel -- accel/accel.sh@63 -- # waitforlisten 3848584 00:12:17.411 13:43:48 accel -- accel/accel.sh@61 -- # build_accel_config 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@835 -- # '[' -z 3848584 ']' 00:12:17.411 13:43:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:17.411 13:43:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:17.411 13:43:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.411 13:43:48 accel -- 
accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:17.411 13:43:48 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:17.411 13:43:48 accel -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:17.411 13:43:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:17.411 13:43:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.411 13:43:48 accel -- accel/accel.sh@41 -- # jq -r . 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.411 13:43:48 accel -- common/autotest_common.sh@10 -- # set +x 00:12:17.411 [2024-12-05 13:43:48.763380] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:17.411 [2024-12-05 13:43:48.763461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848584 ] 00:12:17.411 [2024-12-05 13:43:48.874750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.411 [2024-12-05 13:43:48.932108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.669 [2024-12-05 13:43:48.936578] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:19.048 [2024-12-05 13:43:50.459877] 'OCF_Core' volume operations registered 00:12:19.048 [2024-12-05 13:43:50.459917] 'OCF_Cache' volume operations registered 00:12:19.048 [2024-12-05 13:43:50.464091] 'OCF Composite' volume operations registered 00:12:19.048 [2024-12-05 13:43:50.468248] 'SPDK_block_device' volume operations registered 00:12:19.307 13:43:50 accel -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.307 13:43:50 accel -- common/autotest_common.sh@868 -- # return 0 00:12:19.307 13:43:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:19.307 13:43:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:19.307 13:43:50 accel -- accel/accel.sh@67 -- # [[ 1 -gt 0 ]] 00:12:19.307 13:43:50 accel -- accel/accel.sh@67 -- # check_save_config ioat_scan_accel_module 00:12:19.307 13:43:50 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:12:19.307 13:43:50 accel -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.307 13:43:50 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:12:19.307 13:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:12:19.307 13:43:50 accel -- accel/accel.sh@56 -- # grep ioat_scan_accel_module 00:12:19.567 13:43:50 accel -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.567 "method": "ioat_scan_accel_module" 00:12:19.567 13:43:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:19.567 13:43:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:19.567 13:43:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:19.567 13:43:50 accel -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.567 13:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:12:19.567 13:43:50 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:19.567 13:43:50 accel -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=ioat 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=ioat 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:51 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:51 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:19.567 13:43:51 accel -- accel/accel.sh@72 -- # IFS== 00:12:19.567 13:43:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:19.567 13:43:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:19.567 13:43:51 accel -- accel/accel.sh@75 -- # killprocess 3848584 00:12:19.567 13:43:51 accel -- common/autotest_common.sh@954 -- # '[' -z 3848584 ']' 00:12:19.567 13:43:51 accel -- common/autotest_common.sh@958 -- # kill -0 3848584 00:12:19.567 13:43:51 accel -- common/autotest_common.sh@959 -- # uname 00:12:19.567 13:43:51 accel -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.568 13:43:51 accel -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3848584 00:12:19.568 13:43:51 accel -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.568 13:43:51 accel -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.568 13:43:51 accel -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3848584' 00:12:19.568 killing process with pid 3848584 00:12:19.568 13:43:51 accel -- common/autotest_common.sh@973 -- # kill 3848584 00:12:19.568 13:43:51 accel -- common/autotest_common.sh@978 -- # wait 3848584 00:12:20.506 13:43:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:20.506 13:43:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:20.506 13:43:51 accel -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.506 13:43:51 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.506 13:43:51 accel -- common/autotest_common.sh@10 -- # set +x 00:12:20.765 13:43:52 accel.accel_help -- common/autotest_common.sh@1129 -- # 
accel_perf -h 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:20.765 13:43:52 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:12:20.765 13:43:52 accel.accel_help -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.765 13:43:52 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:20.765 13:43:52 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:20.765 13:43:52 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:20.765 13:43:52 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.765 13:43:52 accel -- common/autotest_common.sh@10 -- # set +x 00:12:20.765 ************************************ 00:12:20.765 START TEST accel_missing_filename 00:12:20.765 ************************************ 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w compress 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # local es=0 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # type -t accel_perf 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.765 13:43:52 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w compress 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:20.765 13:43:52 accel.accel_missing_filename -- accel/accel.sh@41 -- # 
jq -r . 00:12:20.765 [2024-12-05 13:43:52.188272] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:20.765 [2024-12-05 13:43:52.188341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849067 ] 00:12:21.024 [2024-12-05 13:43:52.312437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.024 [2024-12-05 13:43:52.369535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.024 [2024-12-05 13:43:52.373988] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:21.592 [2024-12-05 13:43:52.978964] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:21.592 [2024-12-05 13:43:53.086762] accel_perf.c:1546:main: *ERROR*: ERROR starting application 00:12:21.851 A filename is required. 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # es=234 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@664 -- # es=106 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@665 -- # case "$es" in 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@672 -- # es=1 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.851 00:12:21.851 real 0m0.988s 00:12:21.851 user 0m0.582s 00:12:21.851 sys 0m0.258s 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.851 13:43:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:21.851 ************************************ 00:12:21.851 END TEST accel_missing_filename 00:12:21.851 ************************************ 00:12:21.851 13:43:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:21.851 13:43:53 accel -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:21.851 13:43:53 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.851 13:43:53 accel -- common/autotest_common.sh@10 -- # set +x 00:12:21.851 ************************************ 00:12:21.851 START TEST accel_compress_verify 00:12:21.851 ************************************ 00:12:21.851 13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:21.851 13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # local es=0 00:12:21.851 13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:21.851 13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:12:21.851 13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.851 13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # type -t accel_perf 00:12:21.851 13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.851 
13:43:53 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:21.851 13:43:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:12:21.851 [2024-12-05 13:43:53.265796] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:21.851 [2024-12-05 13:43:53.265877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849273 ] 00:12:22.110 [2024-12-05 13:43:53.388741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.110 [2024-12-05 13:43:53.447556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.110 [2024-12-05 13:43:53.452035] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:22.679 [2024-12-05 13:43:54.074337] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.679 [2024-12-05 13:43:54.200859] accel_perf.c:1546:main: *ERROR*: ERROR starting application 00:12:22.940 00:12:22.940 Compression does not support the verify option, aborting. 
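For reference, the failure just recorded comes from combining -w compress with the -y verify switch. A minimal sketch of an invocation the option parser would accept instead, using only the binary path, the bib input file, and the flags already shown in this log (dropping -y, and omitting the harness-generated -c /dev/fd/62 config); whether the compress path then completes depends on which modules are compiled in, so this is illustrative only:
  # compress the test input for one second; no -y, since compress does not support verify
  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib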
00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # es=161 00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@664 -- # es=33 00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@665 -- # case "$es" in 00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@672 -- # es=1 00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.940 00:12:22.940 real 0m1.027s 00:12:22.940 user 0m0.607s 00:12:22.940 sys 0m0.276s 00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.940 13:43:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 ************************************ 00:12:22.940 END TEST accel_compress_verify 00:12:22.940 ************************************ 00:12:22.940 13:43:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:22.940 13:43:54 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:22.940 13:43:54 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.940 13:43:54 accel -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 ************************************ 00:12:22.940 START TEST accel_wrong_workload 00:12:22.940 ************************************ 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w foobar 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # local es=0 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # type -t accel_perf 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w foobar 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:22.940 13:43:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:12:22.940 Unsupported workload type: foobar 00:12:22.940 [2024-12-05 13:43:54.375950] app.c:1466:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:22.940 accel_perf options: 00:12:22.940 [-h help message] 00:12:22.940 [-q queue depth per core] 00:12:22.940 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:22.940 [-T number of threads per core 00:12:22.940 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:22.940 [-t time in seconds] 00:12:22.940 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:22.940 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy, dix_generate, dix_verify 00:12:22.940 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:22.940 [-l for compress/decompress workloads, name of uncompressed input file 00:12:22.940 [-S for crc32c workload, use this seed value (default 0) 00:12:22.940 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:22.940 [-f for fill workload, use this BYTE value (default 255) 00:12:22.940 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:22.940 [-y verify result if this switch is on] 00:12:22.940 [-a tasks to allocate per core (default: same value as -q)] 00:12:22.940 Can be used to spread operations across a wider range of memory. 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@655 -- # es=1 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.940 00:12:22.940 real 0m0.039s 00:12:22.940 user 0m0.022s 00:12:22.940 sys 0m0.017s 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.940 13:43:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 ************************************ 00:12:22.940 END TEST accel_wrong_workload 00:12:22.940 ************************************ 00:12:22.940 Error: writing output failed: Broken pipe 00:12:22.940 13:43:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:22.940 13:43:54 accel -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:22.940 13:43:54 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.940 13:43:54 accel -- common/autotest_common.sh@10 -- # set +x 00:12:23.200 ************************************ 00:12:23.200 START TEST accel_negative_buffers 00:12:23.200 ************************************ 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # local es=0 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.200 13:43:54 
accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # type -t accel_perf 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w xor -y -x -1 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:23.200 13:43:54 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:23.200 -x option must be non-negative. 00:12:23.200 [2024-12-05 13:43:54.497433] app.c:1466:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:23.200 accel_perf options: 00:12:23.200 [-h help message] 00:12:23.200 [-q queue depth per core] 00:12:23.200 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:23.200 [-T number of threads per core 00:12:23.200 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:23.200 [-t time in seconds] 00:12:23.200 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:23.200 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy, dix_generate, dix_verify 00:12:23.200 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:23.200 [-l for compress/decompress workloads, name of uncompressed input file 00:12:23.200 [-S for crc32c workload, use this seed value (default 0) 00:12:23.200 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:23.200 [-f for fill workload, use this BYTE value (default 255) 00:12:23.200 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:23.200 [-y verify result if this switch is on] 00:12:23.200 [-a tasks to allocate per core (default: same value as -q)] 00:12:23.200 Can be used to spread operations across a wider range of memory. 
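The usage text above is accel_perf's own help output, triggered because this test deliberately passes the invalid value -x -1. For contrast, a minimal sketch of an xor invocation the parser would accept, assuming the example binary at the path recorded elsewhere in this log and using only flags listed in that help text (the harness-generated -c /dev/fd/62 config is omitted):
  # xor across the documented minimum of two source buffers, verify the result, run for one second
  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2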
00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@655 -- # es=1 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:23.200 00:12:23.200 real 0m0.039s 00:12:23.200 user 0m0.017s 00:12:23.200 sys 0m0.021s 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.200 13:43:54 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:23.200 ************************************ 00:12:23.200 END TEST accel_negative_buffers 00:12:23.200 ************************************ 00:12:23.200 Error: writing output failed: Broken pipe 00:12:23.200 13:43:54 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:23.200 13:43:54 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:23.200 13:43:54 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.200 13:43:54 accel -- common/autotest_common.sh@10 -- # set +x 00:12:23.200 ************************************ 00:12:23.200 START TEST accel_crc32c 00:12:23.201 ************************************ 00:12:23.201 13:43:54 accel.accel_crc32c -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:23.201 13:43:54 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:23.201 [2024-12-05 13:43:54.620852] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:23.201 [2024-12-05 13:43:54.620920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849520 ] 00:12:23.460 [2024-12-05 13:43:54.747248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.460 [2024-12-05 13:43:54.819161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.460 [2024-12-05 13:43:54.823759] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 
accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:24.027 13:43:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:25.405 13:43:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:25.405 13:43:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:25.405 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:25.405 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:25.405 13:43:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:25.406 13:43:56 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:25.406 13:43:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:25.406 00:12:25.406 real 0m2.050s 00:12:25.406 user 0m0.007s 00:12:25.406 sys 0m0.003s 00:12:25.406 13:43:56 accel.accel_crc32c -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.406 13:43:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:25.406 ************************************ 00:12:25.406 END TEST accel_crc32c 00:12:25.406 ************************************ 00:12:25.406 13:43:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:25.406 13:43:56 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:25.406 13:43:56 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.406 13:43:56 accel -- common/autotest_common.sh@10 -- # set +x 00:12:25.406 ************************************ 00:12:25.406 START TEST accel_crc32c_C2 00:12:25.406 ************************************ 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:25.406 13:43:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:25.406 13:43:56 
accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:25.406 [2024-12-05 13:43:56.738692] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:25.406 [2024-12-05 13:43:56.738752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849790 ] 00:12:25.406 [2024-12-05 13:43:56.861464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.406 [2024-12-05 13:43:56.916916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.406 [2024-12-05 13:43:56.921383] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- 
# val= 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:26.346 13:43:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:27.285 00:12:27.285 real 0m1.979s 00:12:27.285 user 0m0.007s 00:12:27.285 sys 0m0.001s 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.285 13:43:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:27.285 ************************************ 00:12:27.285 END TEST accel_crc32c_C2 00:12:27.285 ************************************ 00:12:27.285 13:43:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:27.285 13:43:58 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:27.285 13:43:58 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.285 13:43:58 accel -- common/autotest_common.sh@10 -- # set +x 00:12:27.285 ************************************ 00:12:27.285 START TEST accel_copy 00:12:27.285 ************************************ 00:12:27.285 13:43:58 accel.accel_copy -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w copy -y 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:27.285 13:43:58 accel.accel_copy -- 
accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:27.285 13:43:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:27.285 [2024-12-05 13:43:58.797567] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:27.285 [2024-12-05 13:43:58.797641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850116 ] 00:12:27.545 [2024-12-05 13:43:58.906256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.545 [2024-12-05 13:43:58.965438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.545 [2024-12-05 13:43:58.969927] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 
-- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val=ioat 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=ioat 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.114 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:28.115 13:43:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.492 13:44:00 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.492 13:44:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n ioat ]] 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:29.493 13:44:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ ioat == \i\o\a\t ]] 00:12:29.493 00:12:29.493 real 0m1.987s 00:12:29.493 user 0m0.008s 00:12:29.493 sys 0m0.002s 00:12:29.493 13:44:00 accel.accel_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.493 13:44:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:29.493 ************************************ 00:12:29.493 END TEST accel_copy 00:12:29.493 ************************************ 00:12:29.493 13:44:00 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.493 13:44:00 accel -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:29.493 13:44:00 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.493 13:44:00 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.493 ************************************ 00:12:29.493 START TEST accel_fill 00:12:29.493 ************************************ 00:12:29.493 13:44:00 accel.accel_fill -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:29.493 13:44:00 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
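The accel_fill setup above passes -f 128 -q 64 -a 64 alongside -t 1 -w fill -y; per the help text printed earlier in this log, that fills buffers with byte value 128 (0x80 in the trace) at a queue depth of 64 with 64 tasks allocated per core. A hedged standalone equivalent, again dropping the harness's -c /dev/fd/62 config plumbing, would be:
  # fill workload: byte value 128, queue depth 64, 64 tasks per core, verify, 1 second
  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y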
00:12:29.493 [2024-12-05 13:44:00.854672] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:29.493 [2024-12-05 13:44:00.854731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850402 ] 00:12:29.493 [2024-12-05 13:44:00.976313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.750 [2024-12-05 13:44:01.030384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.750 [2024-12-05 13:44:01.034825] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=ioat 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=ioat 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:30.317 13:44:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:31.691 13:44:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:31.692 13:44:02 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n ioat ]] 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:31.692 13:44:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ ioat == \i\o\a\t ]] 00:12:31.692 00:12:31.692 real 0m1.990s 00:12:31.692 user 0m0.004s 00:12:31.692 sys 0m0.004s 00:12:31.692 13:44:02 accel.accel_fill -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.692 13:44:02 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:31.692 ************************************ 00:12:31.692 END TEST accel_fill 00:12:31.692 ************************************ 00:12:31.692 13:44:02 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:31.692 13:44:02 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:31.692 13:44:02 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.692 13:44:02 accel -- common/autotest_common.sh@10 -- # set +x 00:12:31.692 ************************************ 00:12:31.692 START TEST accel_copy_crc32c 00:12:31.692 ************************************ 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w copy_crc32c -y 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:31.692 13:44:02 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
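For orientation: the run_test accel_copy_crc32c line above wraps the same accel_perf example binary used by every test in this block, and the full command line is echoed at accel/accel.sh@12. A minimal sketch of re-running it by hand, assuming the SPDK tree is already built at the workspace path shown in the log (-c /dev/fd/62 carries the JSON fragments collected by build_accel_config, such as the ioat_scan_accel_module entry, and can be omitted to stay on the default software module):

    # sketch: replay the copy_crc32c run with the flags echoed in the trace above
    cd /var/jenkins/workspace/nvme-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y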
00:12:31.692 [2024-12-05 13:44:02.888125] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:31.692 [2024-12-05 13:44:02.888166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850692 ] 00:12:31.692 [2024-12-05 13:44:02.995712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.692 [2024-12-05 13:44:03.050695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.692 [2024-12-05 13:44:03.055171] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:32.261 13:44:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.642 00:12:33.642 real 0m1.958s 00:12:33.642 user 0m0.008s 00:12:33.642 sys 0m0.001s 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.642 13:44:04 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:33.642 ************************************ 00:12:33.642 END TEST accel_copy_crc32c 00:12:33.642 ************************************ 00:12:33.642 13:44:04 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:33.642 13:44:04 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:33.642 13:44:04 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.642 13:44:04 accel -- common/autotest_common.sh@10 -- # set +x 00:12:33.642 ************************************ 00:12:33.642 START TEST accel_copy_crc32c_C2 00:12:33.642 ************************************ 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.642 13:44:04 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:33.642 13:44:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:33.642 [2024-12-05 13:44:04.934017] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:33.642 [2024-12-05 13:44:04.934075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851033 ] 00:12:33.642 [2024-12-05 13:44:05.054788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.642 [2024-12-05 13:44:05.110046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.642 [2024-12-05 13:44:05.114508] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 
13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:34.225 13:44:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.604 
00:12:35.604 real 0m1.993s 00:12:35.604 user 0m0.008s 00:12:35.604 sys 0m0.001s 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.604 13:44:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:35.604 ************************************ 00:12:35.604 END TEST accel_copy_crc32c_C2 00:12:35.604 ************************************ 00:12:35.604 13:44:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:35.604 13:44:06 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:35.604 13:44:06 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.604 13:44:06 accel -- common/autotest_common.sh@10 -- # set +x 00:12:35.604 ************************************ 00:12:35.604 START TEST accel_dualcast 00:12:35.604 ************************************ 00:12:35.604 13:44:06 accel.accel_dualcast -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w dualcast -y 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:35.604 13:44:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:35.604 [2024-12-05 13:44:06.992453] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
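The accel_copy_crc32c_C2 pass that just finished differs from the plain copy_crc32c pass only by the extra -C 2 in its run_test line, which is presumably what produces the paired '4096 bytes'/'8192 bytes' values in its trace (an inference from the log, not from the accel_perf source). The dualcast run whose startup follows can be replayed the same way, from the spdk tree as in the earlier sketch:

    # sketch: flags taken verbatim from the run_test lines above
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2   # the _C2 variant
    ./build/examples/accel_perf -t 1 -w dualcast -y           # the run that follows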
00:12:35.604 [2024-12-05 13:44:06.992512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851282 ] 00:12:35.604 [2024-12-05 13:44:07.112907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.863 [2024-12-05 13:44:07.166774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.863 [2024-12-05 13:44:07.171214] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.436 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:36.437 
13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:36.437 13:44:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:37.461 13:44:08 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:37.461 13:44:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.461 00:12:37.461 real 0m1.993s 00:12:37.461 user 0m0.008s 00:12:37.461 sys 0m0.001s 00:12:37.461 13:44:08 accel.accel_dualcast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.461 13:44:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:37.461 ************************************ 00:12:37.461 END TEST accel_dualcast 00:12:37.461 ************************************ 00:12:37.720 13:44:08 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:37.720 13:44:08 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:37.720 13:44:08 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.720 13:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:12:37.720 ************************************ 00:12:37.720 START TEST accel_compare 00:12:37.720 ************************************ 00:12:37.720 13:44:09 accel.accel_compare -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w compare -y 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:37.720 13:44:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.721 13:44:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:37.721 13:44:09 accel.accel_compare -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:37.721 13:44:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:37.721 13:44:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:37.721 13:44:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:37.721 [2024-12-05 13:44:09.056811] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
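Each test decides pass/fail from the three checks echoed at accel/accel.sh@27, as in the dualcast block above. A plain-bash paraphrase of what those checks amount to (not the literal accel.sh source; the variable names are the ones shown in the trace):

    # accel_module and accel_opc are the values reported back by the accel_perf run
    [[ -n "$accel_module" ]]            # some module (software or ioat) handled the ops
    [[ -n "$accel_opc" ]]               # the opcode under test was seen (dualcast here)
    [[ "$accel_module" == software ]]   # and it matches the module expected for this run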
00:12:37.721 [2024-12-05 13:44:09.056873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851649 ] 00:12:37.721 [2024-12-05 13:44:09.180798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.721 [2024-12-05 13:44:09.235846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.721 [2024-12-05 13:44:09.240329] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:38.659 13:44:09 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:38.659 13:44:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:39.597 13:44:11 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:39.597 13:44:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:39.597 00:12:39.597 real 0m2.003s 00:12:39.597 user 0m0.008s 00:12:39.597 sys 0m0.000s 00:12:39.597 13:44:11 accel.accel_compare -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.597 13:44:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:39.597 ************************************ 00:12:39.597 END TEST accel_compare 00:12:39.597 ************************************ 00:12:39.597 13:44:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:39.597 13:44:11 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:39.597 13:44:11 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.597 13:44:11 accel -- common/autotest_common.sh@10 -- # set +x 00:12:39.597 ************************************ 00:12:39.597 START TEST accel_xor 00:12:39.597 ************************************ 00:12:39.597 13:44:11 accel.accel_xor -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w xor -y 00:12:39.597 13:44:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:39.597 13:44:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:39.598 13:44:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:39.598 [2024-12-05 13:44:11.109022] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
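The xor test launched above follows the same pattern; its trace records a value of 2 where the later -x 3 run records 3, so that value is presumably the number of xor source buffers (an inference from the log values, not from documentation). Sketch of the first xor run:

    # sketch: the plain xor run started above, flags as echoed in the run_test line
    ./build/examples/accel_perf -t 1 -w xor -y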
00:12:39.598 [2024-12-05 13:44:11.109064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851863 ] 00:12:39.857 [2024-12-05 13:44:11.215965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.857 [2024-12-05 13:44:11.271228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.857 [2024-12-05 13:44:11.275709] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case 
"$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:40.427 13:44:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:41.807 13:44:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:41.807 00:12:41.807 real 0m1.959s 00:12:41.807 user 0m0.007s 00:12:41.807 sys 0m0.001s 00:12:41.807 13:44:13 accel.accel_xor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.807 13:44:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:41.808 ************************************ 00:12:41.808 END TEST accel_xor 00:12:41.808 ************************************ 00:12:41.808 13:44:13 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:41.808 13:44:13 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:41.808 13:44:13 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.808 13:44:13 accel -- common/autotest_common.sh@10 -- # set +x 00:12:41.808 ************************************ 00:12:41.808 START TEST accel_xor 00:12:41.808 ************************************ 00:12:41.808 13:44:13 accel.accel_xor -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w xor -y -x 3 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:41.808 13:44:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:41.808 [2024-12-05 13:44:13.154143] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:41.808 [2024-12-05 13:44:13.154201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852235 ] 00:12:41.808 [2024-12-05 13:44:13.274636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.808 [2024-12-05 13:44:13.328069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.068 [2024-12-05 13:44:13.332514] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case 
"$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:42.638 13:44:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.018 13:44:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.018 13:44:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.018 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.018 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.018 13:44:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.018 13:44:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.018 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:44.019 13:44:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:44.019 00:12:44.019 real 0m1.995s 00:12:44.019 user 0m0.007s 00:12:44.019 sys 0m0.002s 00:12:44.019 13:44:15 accel.accel_xor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.019 13:44:15 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:44.019 ************************************ 00:12:44.019 END TEST accel_xor 00:12:44.019 ************************************ 00:12:44.019 13:44:15 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:44.019 13:44:15 accel -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:44.019 13:44:15 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.019 13:44:15 accel -- common/autotest_common.sh@10 -- # set +x 00:12:44.019 ************************************ 00:12:44.019 START TEST accel_dif_verify 00:12:44.019 ************************************ 00:12:44.019 13:44:15 accel.accel_dif_verify -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w dif_verify 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:44.019 13:44:15 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:44.019 [2024-12-05 13:44:15.210899] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:44.019 [2024-12-05 13:44:15.210957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852450 ] 00:12:44.019 [2024-12-05 13:44:15.333406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.019 [2024-12-05 13:44:15.388526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.019 [2024-12-05 13:44:15.393002] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:44.585 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:44.585 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.585 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.585 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.585 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 
13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:44.586 13:44:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.962 
13:44:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:45.962 13:44:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:45.962 00:12:45.962 real 0m1.965s 00:12:45.962 user 0m0.008s 00:12:45.963 sys 0m0.001s 00:12:45.963 13:44:17 accel.accel_dif_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.963 13:44:17 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:45.963 ************************************ 00:12:45.963 END TEST accel_dif_verify 00:12:45.963 ************************************ 00:12:45.963 13:44:17 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:45.963 13:44:17 accel -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:45.963 13:44:17 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.963 13:44:17 accel -- common/autotest_common.sh@10 -- # set +x 00:12:45.963 ************************************ 00:12:45.963 START TEST accel_dif_generate 00:12:45.963 ************************************ 00:12:45.963 13:44:17 accel.accel_dif_generate -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w dif_generate 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:45.963 13:44:17 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:45.963 13:44:17 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:45.963 [2024-12-05 13:44:17.235126] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:45.963 [2024-12-05 13:44:17.235169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852827 ] 00:12:45.963 [2024-12-05 13:44:17.340978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.963 [2024-12-05 13:44:17.397621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.963 [2024-12-05 13:44:17.402099] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:46.529 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate 
-- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:46.530 13:44:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:47.904 13:44:19 
accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:47.904 13:44:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:47.904 00:12:47.904 real 0m1.972s 00:12:47.904 user 0m0.009s 00:12:47.904 sys 0m0.000s 00:12:47.904 13:44:19 accel.accel_dif_generate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.904 13:44:19 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 ************************************ 00:12:47.904 END TEST accel_dif_generate 00:12:47.904 ************************************ 00:12:47.904 13:44:19 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:47.904 13:44:19 accel -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:47.904 13:44:19 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.904 13:44:19 accel -- common/autotest_common.sh@10 -- # set +x 00:12:47.905 ************************************ 00:12:47.905 START TEST accel_dif_generate_copy 00:12:47.905 ************************************ 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w dif_generate_copy 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:47.905 13:44:19 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:47.905 [2024-12-05 13:44:19.279420] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:12:47.905 [2024-12-05 13:44:19.279481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853031 ] 00:12:47.905 [2024-12-05 13:44:19.399764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.163 [2024-12-05 13:44:19.453862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.163 [2024-12-05 13:44:19.458308] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.728 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:48.729 13:44:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case 
"$var" in 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:50.104 00:12:50.104 real 0m1.983s 00:12:50.104 user 0m0.014s 00:12:50.104 sys 0m0.001s 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.104 13:44:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 ************************************ 00:12:50.104 END TEST accel_dif_generate_copy 00:12:50.104 ************************************ 00:12:50.104 13:44:21 accel -- accel/accel.sh@114 -- # run_test accel_dix_verify accel_test -t 1 -w dix_verify 00:12:50.104 13:44:21 accel -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:50.104 13:44:21 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.105 13:44:21 accel -- common/autotest_common.sh@10 -- # set +x 00:12:50.105 ************************************ 00:12:50.105 START TEST accel_dix_verify 00:12:50.105 ************************************ 00:12:50.105 13:44:21 accel.accel_dix_verify -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w dix_verify 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@17 -- # local accel_module 00:12:50.105 13:44:21 accel.accel_dix_verify -- 
accel/accel.sh@19 -- # IFS=: 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dix_verify 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dix_verify 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:50.105 13:44:21 accel.accel_dix_verify -- accel/accel.sh@41 -- # jq -r . 00:12:50.105 [2024-12-05 13:44:21.331160] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:50.105 [2024-12-05 13:44:21.331223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853418 ] 00:12:50.105 [2024-12-05 13:44:21.452898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.105 [2024-12-05 13:44:21.507867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.105 [2024-12-05 13:44:21.512355] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=0x1 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- 
accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=dix_verify 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@23 -- # accel_opc=dix_verify 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=software 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=32 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=32 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=1 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 
00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val=No 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:50.673 13:44:22 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@20 -- # val= 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # IFS=: 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ -n dix_verify ]] 00:12:52.048 13:44:23 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:12:52.048 00:12:52.048 real 0m1.984s 00:12:52.048 user 0m0.009s 00:12:52.048 sys 0m0.000s 00:12:52.048 13:44:23 accel.accel_dix_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.048 13:44:23 accel.accel_dix_verify -- common/autotest_common.sh@10 -- # set +x 00:12:52.048 ************************************ 00:12:52.048 END TEST accel_dix_verify 00:12:52.048 ************************************ 00:12:52.048 13:44:23 accel -- accel/accel.sh@115 -- # run_test accel_dix_generate accel_test -t 1 -w dif_generate 00:12:52.048 13:44:23 accel -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:52.048 13:44:23 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.048 13:44:23 accel -- common/autotest_common.sh@10 -- # set +x 00:12:52.048 ************************************ 00:12:52.048 START TEST accel_dix_generate 00:12:52.048 ************************************ 00:12:52.048 13:44:23 accel.accel_dix_generate -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w dif_generate 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@17 -- # local accel_module 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:52.048 13:44:23 accel.accel_dix_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:52.049 13:44:23 accel.accel_dix_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:52.049 13:44:23 accel.accel_dix_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:52.049 13:44:23 accel.accel_dix_generate -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:52.049 13:44:23 accel.accel_dix_generate -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:52.049 13:44:23 accel.accel_dix_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:52.049 13:44:23 accel.accel_dix_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:52.049 13:44:23 accel.accel_dix_generate -- accel/accel.sh@41 -- # jq -r . 00:12:52.049 [2024-12-05 13:44:23.398335] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
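Note: the accel_dix_generate block above starts accel_perf with the module config on /dev/fd/62 and a one-second dif_generate workload (the test name says dix, but the logged workload is dif_generate). A rough standalone equivalent, assuming the same workspace paths and omitting the fd-62 config:

# Sketch of the accel_perf invocation traced above (paths from this workspace).
# accel.sh normally feeds its module config on /dev/fd/62; omitting it here
# means accel_perf falls back to its defaults, so treat this as an approximation.
# -t 1 runs the workload for one second; -w dif_generate selects the workload.
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate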
00:12:52.049 [2024-12-05 13:44:23.398417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853638 ] 00:12:52.049 [2024-12-05 13:44:23.518983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.307 [2024-12-05 13:44:23.575285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.307 [2024-12-05 13:44:23.579760] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=0x1 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:52.875 13:44:24 
accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:52.875 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=software 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=32 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=32 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=1 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=No 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:52.876 
13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:52.876 13:44:24 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:54.251 13:44:25 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:54.251 00:12:54.251 real 0m1.981s 00:12:54.251 user 0m0.010s 00:12:54.251 sys 0m0.001s 00:12:54.251 13:44:25 accel.accel_dix_generate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.251 13:44:25 accel.accel_dix_generate -- common/autotest_common.sh@10 -- # set +x 00:12:54.251 ************************************ 00:12:54.251 END TEST accel_dix_generate 00:12:54.251 ************************************ 00:12:54.251 13:44:25 accel -- accel/accel.sh@117 -- # [[ y == y ]] 00:12:54.251 13:44:25 accel -- accel/accel.sh@118 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:12:54.251 13:44:25 accel -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:54.251 13:44:25 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.251 13:44:25 accel -- common/autotest_common.sh@10 -- # set +x 00:12:54.251 ************************************ 00:12:54.251 START TEST accel_comp 00:12:54.251 ************************************ 00:12:54.251 13:44:25 accel.accel_comp -- 
common/autotest_common.sh@1129 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:54.251 13:44:25 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:54.251 [2024-12-05 13:44:25.439539] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:54.251 [2024-12-05 13:44:25.439598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853998 ] 00:12:54.251 [2024-12-05 13:44:25.561809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.251 [2024-12-05 13:44:25.614935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.251 [2024-12-05 13:44:25.619378] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 
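Note: the accel_comp trace drives a one-second compress workload against the repository's test/accel/bib input. A rough standalone equivalent (again without the fd-62 config accel.sh supplies):

# -w compress selects the compress opcode; -l points it at the bib input file
# bundled with the repository (same path as in the trace above).
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf \
  -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib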
00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp 
-- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.820 13:44:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.196 13:44:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.196 13:44:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.196 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.196 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:56.197 13:44:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.197 00:12:56.197 real 0m1.996s 00:12:56.197 user 0m0.007s 00:12:56.197 sys 0m0.003s 00:12:56.197 13:44:27 accel.accel_comp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.197 13:44:27 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:56.197 ************************************ 00:12:56.197 END TEST accel_comp 00:12:56.197 ************************************ 00:12:56.197 13:44:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:56.197 13:44:27 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:56.197 13:44:27 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.197 13:44:27 accel -- common/autotest_common.sh@10 -- # set +x 00:12:56.197 ************************************ 00:12:56.197 START TEST accel_decomp 00:12:56.197 ************************************ 00:12:56.197 13:44:27 accel.accel_decomp -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:56.197 13:44:27 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:56.197 [2024-12-05 13:44:27.511543] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
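Note: accel_decomp runs the decompress opcode on the same bib input with -y, which enables result verification; that is why this test's trace later records val=Yes where the compress run recorded val=No. A rough standalone equivalent:

# -w decompress exercises the decompress opcode against the bib input;
# -y turns on verification of the result.
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf \
  -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y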
00:12:56.197 [2024-12-05 13:44:27.511603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854206 ] 00:12:56.197 [2024-12-05 13:44:27.632801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.197 [2024-12-05 13:44:27.688518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.197 [2024-12-05 13:44:27.692995] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 
accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:57.153 13:44:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read 
-r var val 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:58.086 13:44:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:58.087 13:44:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:58.087 13:44:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:58.087 13:44:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:58.087 00:12:58.087 real 0m1.991s 00:12:58.087 user 0m0.006s 00:12:58.087 sys 0m0.003s 00:12:58.087 13:44:29 accel.accel_decomp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.087 13:44:29 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:58.087 ************************************ 00:12:58.087 END TEST accel_decomp 00:12:58.087 ************************************ 00:12:58.087 13:44:29 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:58.087 13:44:29 accel -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:12:58.087 13:44:29 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.087 13:44:29 accel -- common/autotest_common.sh@10 -- # set +x 00:12:58.087 ************************************ 00:12:58.087 START TEST accel_decomp_full 00:12:58.087 ************************************ 00:12:58.087 13:44:29 accel.accel_decomp_full -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 
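Note: accel_decomp_full repeats the decompress run with -o 0 added. Judging from the '111250 bytes' value recorded later in this trace, -o 0 drops the default 4096-byte transfer size in favour of the full input. A rough standalone equivalent:

# Same as the accel_decomp run plus -o 0, which replaces the default
# 4096-byte transfer size; the '111250 bytes' value in the trace suggests
# the whole bib input is processed per operation.
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf \
  -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0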
00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:58.087 13:44:29 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:58.087 [2024-12-05 13:44:29.575274] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:12:58.087 [2024-12-05 13:44:29.575333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854569 ] 00:12:58.345 [2024-12-05 13:44:29.697208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.345 [2024-12-05 13:44:29.752460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.345 [2024-12-05 13:44:29.756927] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:12:58.914 
13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.914 13:44:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:00.294 13:44:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:00.294 00:13:00.294 real 0m2.008s 00:13:00.294 user 0m0.008s 00:13:00.294 sys 0m0.001s 00:13:00.294 13:44:31 accel.accel_decomp_full -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.294 13:44:31 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:13:00.294 ************************************ 00:13:00.294 END TEST accel_decomp_full 00:13:00.295 ************************************ 00:13:00.295 13:44:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:00.295 13:44:31 accel -- common/autotest_common.sh@1105 -- 
# '[' 11 -le 1 ']' 00:13:00.295 13:44:31 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.295 13:44:31 accel -- common/autotest_common.sh@10 -- # set +x 00:13:00.295 ************************************ 00:13:00.295 START TEST accel_decomp_mcore 00:13:00.295 ************************************ 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:00.295 13:44:31 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:00.295 [2024-12-05 13:44:31.666312] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
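Note: accel_decomp_mcore adds -m 0xf, a four-core mask; the EAL and reactor notices that follow accordingly report 'Total cores available: 4' and reactors on cores 0-3. A rough standalone equivalent:

# -m 0xf sets a four-core mask (cores 0-3), so four reactors start instead of
# the single core 0 used by the earlier single-core tests.
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf \
  -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf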
00:13:00.295 [2024-12-05 13:44:31.666389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854820 ] 00:13:00.295 [2024-12-05 13:44:31.777887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.554 [2024-12-05 13:44:31.837390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.554 [2024-12-05 13:44:31.837479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.554 [2024-12-05 13:44:31.837569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.554 [2024-12-05 13:44:31.837573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.554 [2024-12-05 13:44:31.842062] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.122 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:01.123 
13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:01.123 13:44:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.502 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.503 00:13:02.503 real 0m1.981s 00:13:02.503 user 0m0.010s 00:13:02.503 sys 0m0.002s 
00:13:02.503 13:44:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.503 13:44:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:02.503 ************************************ 00:13:02.503 END TEST accel_decomp_mcore 00:13:02.503 ************************************ 00:13:02.503 13:44:33 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:02.503 13:44:33 accel -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:02.503 13:44:33 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.503 13:44:33 accel -- common/autotest_common.sh@10 -- # set +x 00:13:02.503 ************************************ 00:13:02.503 START TEST accel_decomp_full_mcore 00:13:02.503 ************************************ 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:02.503 13:44:33 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:02.503 [2024-12-05 13:44:33.700812] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:13:02.503 [2024-12-05 13:44:33.700857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855151 ] 00:13:02.503 [2024-12-05 13:44:33.806034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.503 [2024-12-05 13:44:33.866587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.503 [2024-12-05 13:44:33.866678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.503 [2024-12-05 13:44:33.866719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.503 [2024-12-05 13:44:33.866714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.503 [2024-12-05 13:44:33.871235] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 
13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.072 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:03.073 13:44:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.447 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:04.448 
13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:04.448 00:13:04.448 real 0m1.945s 00:13:04.448 user 0m0.009s 00:13:04.448 sys 0m0.002s 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.448 13:44:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:04.448 ************************************ 00:13:04.448 END TEST accel_decomp_full_mcore 00:13:04.448 ************************************ 00:13:04.448 13:44:35 accel -- accel/accel.sh@123 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:04.448 13:44:35 accel -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:04.448 13:44:35 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.448 13:44:35 accel -- common/autotest_common.sh@10 -- # set +x 00:13:04.448 ************************************ 00:13:04.448 START TEST accel_decomp_mthread 00:13:04.448 ************************************ 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:04.448 13:44:35 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:04.448 [2024-12-05 13:44:35.716032] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:13:04.448 [2024-12-05 13:44:35.716079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855406 ] 00:13:04.448 [2024-12-05 13:44:35.822285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.448 [2024-12-05 13:44:35.877731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.448 [2024-12-05 13:44:35.882203] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 
-- # val= 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.056 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.057 13:44:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.431 
13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:06.431 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:06.431 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.431 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:06.432 00:13:06.432 real 0m1.988s 00:13:06.432 user 0m0.008s 00:13:06.432 sys 0m0.002s 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.432 13:44:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:06.432 ************************************ 00:13:06.432 END TEST accel_decomp_mthread 00:13:06.432 ************************************ 00:13:06.432 13:44:37 accel -- accel/accel.sh@124 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:06.432 13:44:37 accel -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:06.432 13:44:37 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.432 13:44:37 accel -- common/autotest_common.sh@10 -- # set +x 
00:13:06.432 ************************************ 00:13:06.432 START TEST accel_decomp_full_mthread 00:13:06.432 ************************************ 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:06.432 13:44:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:06.432 [2024-12-05 13:44:37.794229] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:13:06.432 [2024-12-05 13:44:37.794290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855716 ] 00:13:06.432 [2024-12-05 13:44:37.916109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.691 [2024-12-05 13:44:37.971879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.691 [2024-12-05 13:44:37.976321] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:07.258 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.258 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.258 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.258 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.258 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:07.259 13:44:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.639 00:13:08.639 real 0m2.025s 00:13:08.639 user 0m0.008s 00:13:08.639 sys 0m0.001s 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.639 13:44:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:08.639 
************************************ 00:13:08.639 END TEST accel_decomp_full_mthread 00:13:08.639 ************************************ 00:13:08.639 13:44:39 accel -- accel/accel.sh@126 -- # [[ n == y ]] 00:13:08.639 13:44:39 accel -- accel/accel.sh@139 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:08.639 13:44:39 accel -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.639 13:44:39 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.639 13:44:39 accel -- common/autotest_common.sh@10 -- # set +x 00:13:08.639 13:44:39 accel -- accel/accel.sh@139 -- # build_accel_config 00:13:08.639 13:44:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:08.639 13:44:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:08.639 13:44:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:08.639 13:44:39 accel -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:08.639 13:44:39 accel -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:08.639 13:44:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:08.639 13:44:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:08.639 13:44:39 accel -- accel/accel.sh@41 -- # jq -r . 00:13:08.639 ************************************ 00:13:08.639 START TEST accel_dif_functional_tests 00:13:08.639 ************************************ 00:13:08.639 13:44:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:08.639 [2024-12-05 13:44:39.918682] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:13:08.639 [2024-12-05 13:44:39.918744] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856019 ] 00:13:08.639 [2024-12-05 13:44:40.042072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.639 [2024-12-05 13:44:40.102464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.639 [2024-12-05 13:44:40.102492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.639 [2024-12-05 13:44:40.102496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.639 [2024-12-05 13:44:40.107056] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:10.542 [2024-12-05 13:44:41.549821] 'OCF_Core' volume operations registered 00:13:10.542 [2024-12-05 13:44:41.549867] 'OCF_Cache' volume operations registered 00:13:10.542 [2024-12-05 13:44:41.554007] 'OCF Composite' volume operations registered 00:13:10.542 [2024-12-05 13:44:41.558179] 'SPDK_block_device' volume operations registered 00:13:10.542 00:13:10.542 00:13:10.542 CUnit - A unit testing framework for C - Version 2.1-3 00:13:10.542 http://cunit.sourceforge.net/ 00:13:10.542 00:13:10.542 00:13:10.542 Suite: accel_dif 00:13:10.542 Test: verify: DIF generated, GUARD check ...passed 00:13:10.542 Test: verify: DIX generated, GUARD check ...passed 00:13:10.542 Test: verify: DIF generated, APPTAG check ...passed 00:13:10.542 Test: verify: DIX generated, APPTAG check ...passed 00:13:10.542 Test: verify: DIF generated, REFTAG check ...passed 00:13:10.542 Test: verify: DIX generated, REFTAG check ...passed 00:13:10.542 Test: verify: DIX generated, all flags check ...passed 00:13:10.542 Test: 
verify: DIF not generated, GUARD check ...[2024-12-05 13:44:41.562008] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:10.542 passed 00:13:10.542 Test: verify: DIX not generated, GUARD check ...[2024-12-05 13:44:41.562080] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=0, Actual=7867 00:13:10.542 passed 00:13:10.542 Test: verify: DIF not generated, APPTAG check ...[2024-12-05 13:44:41.562108] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:10.542 passed 00:13:10.542 Test: verify: DIX not generated, APPTAG check ...[2024-12-05 13:44:41.562145] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=0 00:13:10.542 passed 00:13:10.543 Test: verify: DIF not generated, REFTAG check ...[2024-12-05 13:44:41.562171] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:10.543 passed 00:13:10.543 Test: verify: DIX not generated, REFTAG check ...[2024-12-05 13:44:41.562208] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=0 00:13:10.543 passed 00:13:10.543 Test: verify: DIX not generated, all flags check ...[2024-12-05 13:44:41.562246] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=0, Actual=7867 00:13:10.543 passed 00:13:10.543 Test: verify: DIX guard not generated, all flags check ...[2024-12-05 13:44:41.562283] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=0, Actual=7867 00:13:10.543 passed 00:13:10.543 Test: verify: DIX apptag not generated, all flags check ...[2024-12-05 13:44:41.562324] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=0 00:13:10.543 passed 00:13:10.543 Test: verify: DIX reftag not generated, all flags check ...[2024-12-05 13:44:41.562364] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=0 00:13:10.543 passed 00:13:10.543 Test: verify: DIF APPTAG correct, APPTAG check ...passed 00:13:10.543 Test: verify: DIX APPTAG correct, APPTAG check ...passed 00:13:10.543 Test: verify: DIF APPTAG incorrect, APPTAG check ...[2024-12-05 13:44:41.562462] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:10.543 passed 00:13:10.543 Test: verify: DIX APPTAG incorrect, APPTAG check ...[2024-12-05 13:44:41.562499] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:10.543 passed 00:13:10.543 Test: verify: DIF APPTAG incorrect, no APPTAG check ...passed 00:13:10.543 Test: verify: DIX APPTAG incorrect, no APPTAG check ...passed 00:13:10.543 Test: verify: DIF REFTAG incorrect, REFTAG ignore ...passed 00:13:10.543 Test: verify: DIX REFTAG incorrect, REFTAG ignore ...passed 00:13:10.543 Test: verify: DIF REFTAG_INIT correct, REFTAG check ...passed 00:13:10.543 Test: verify: DIX REFTAG_INIT correct, REFTAG check ...passed 00:13:10.543 Test: verify: DIF REFTAG_INIT incorrect, REFTAG check ...[2024-12-05 13:44:41.562771] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:10.543 passed 00:13:10.543 Test: verify: DIX REFTAG_INIT incorrect, REFTAG check ...[2024-12-05 13:44:41.562820] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:10.543 passed 00:13:10.543 Test: verify copy: DIF generated, GUARD check ...passed 
00:13:10.543 Test: verify copy: DIF generated, APPTAG check ...passed 00:13:10.543 Test: verify copy: DIF generated, REFTAG check ...passed 00:13:10.543 Test: verify copy: DIF not generated, GUARD check ...[2024-12-05 13:44:41.562964] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:10.543 passed 00:13:10.543 Test: verify copy: DIF not generated, APPTAG check ...[2024-12-05 13:44:41.563000] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:10.543 passed 00:13:10.543 Test: verify copy: DIF not generated, REFTAG check ...[2024-12-05 13:44:41.563029] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:10.543 passed 00:13:10.543 Test: generate copy: DIF generated, GUARD check ...passed 00:13:10.543 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:10.543 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:10.543 Test: generate: DIX generated, GUARD check ...passed 00:13:10.543 Test: generate: DIX generated, APTTAG check ...passed 00:13:10.543 Test: generate: DIX generated, REFTAG check ...passed 00:13:10.543 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:10.543 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:10.543 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:10.543 Test: generate copy: DIF iovecs-len validate ...[2024-12-05 13:44:41.563466] dif.c:1291:_spdk_dif_insert_copy: *ERROR*: Size of iovec arrays are not valid. 00:13:10.543 passed 00:13:10.543 Test: generate copy: DIF buffer alignment validate ...passed 00:13:10.543 Test: generate copy sequence: DIF generated, GUARD check ...passed 00:13:10.543 Test: generate copy sequence: DIF generated, APTTAG check ...passed 00:13:10.543 Test: generate copy sequence: DIF generated, REFTAG check ...passed 00:13:10.543 Test: verify copy sequence: DIF generated, GUARD check ...passed 00:13:10.543 Test: verify copy sequence: DIF generated, APPTAG check ...passed 00:13:10.543 Test: verify copy sequence: DIF generated, REFTAG check ...passed 00:13:10.543 00:13:10.543 Run Summary: Type Total Ran Passed Failed Inactive 00:13:10.543 suites 1 1 n/a 0 0 00:13:10.543 tests 52 52 52 0 0 00:13:10.543 asserts 259 259 259 0 n/a 00:13:10.543 00:13:10.543 Elapsed time = 0.005 seconds 00:13:10.802 00:13:10.802 real 0m2.424s 00:13:10.802 user 0m4.296s 00:13:10.802 sys 0m0.368s 00:13:10.802 13:44:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.802 13:44:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:10.802 ************************************ 00:13:10.802 END TEST accel_dif_functional_tests 00:13:10.802 ************************************ 00:13:11.061 00:13:11.061 real 0m53.844s 00:13:11.061 user 0m52.833s 00:13:11.061 sys 0m9.015s 00:13:11.061 13:44:42 accel -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.062 13:44:42 accel -- common/autotest_common.sh@10 -- # set +x 00:13:11.062 ************************************ 00:13:11.062 END TEST accel 00:13:11.062 ************************************ 00:13:11.062 13:44:42 -- spdk/autotest.sh@173 -- # run_test accel_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:11.062 13:44:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.062 13:44:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.062 
13:44:42 -- common/autotest_common.sh@10 -- # set +x 00:13:11.062 ************************************ 00:13:11.062 START TEST accel_rpc 00:13:11.062 ************************************ 00:13:11.062 13:44:42 accel_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:11.062 * Looking for test storage... 00:13:11.062 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel 00:13:11.062 13:44:42 accel_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.062 13:44:42 accel_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.062 13:44:42 accel_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.062 13:44:42 accel_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@345 -- # : 1 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@353 -- # local d=1 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.062 13:44:42 accel_rpc -- scripts/common.sh@355 -- # echo 1 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@353 -- # local d=2 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@355 -- # echo 2 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.321 13:44:42 accel_rpc -- scripts/common.sh@368 -- # return 0 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.321 --rc genhtml_branch_coverage=1 00:13:11.321 --rc genhtml_function_coverage=1 00:13:11.321 --rc genhtml_legend=1 00:13:11.321 --rc geninfo_all_blocks=1 00:13:11.321 --rc geninfo_unexecuted_blocks=1 00:13:11.321 00:13:11.321 ' 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.321 --rc genhtml_branch_coverage=1 00:13:11.321 --rc genhtml_function_coverage=1 00:13:11.321 --rc genhtml_legend=1 00:13:11.321 --rc geninfo_all_blocks=1 00:13:11.321 --rc geninfo_unexecuted_blocks=1 00:13:11.321 00:13:11.321 ' 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.321 --rc genhtml_branch_coverage=1 00:13:11.321 --rc genhtml_function_coverage=1 00:13:11.321 --rc genhtml_legend=1 00:13:11.321 --rc geninfo_all_blocks=1 00:13:11.321 --rc geninfo_unexecuted_blocks=1 00:13:11.321 00:13:11.321 ' 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.321 --rc genhtml_branch_coverage=1 00:13:11.321 --rc genhtml_function_coverage=1 00:13:11.321 --rc genhtml_legend=1 00:13:11.321 --rc geninfo_all_blocks=1 00:13:11.321 --rc geninfo_unexecuted_blocks=1 00:13:11.321 00:13:11.321 ' 00:13:11.321 13:44:42 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:11.321 13:44:42 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3856430 00:13:11.321 13:44:42 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3856430 00:13:11.321 13:44:42 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@835 -- # '[' -z 3856430 ']' 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:13:11.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.321 13:44:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.321 [2024-12-05 13:44:42.656442] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:13:11.321 [2024-12-05 13:44:42.656543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856430 ] 00:13:11.321 [2024-12-05 13:44:42.772158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.321 [2024-12-05 13:44:42.828124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.258 13:44:43 accel_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.258 13:44:43 accel_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:12.258 13:44:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:12.258 13:44:43 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:12.258 13:44:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:12.258 13:44:43 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:12.258 13:44:43 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:12.258 13:44:43 accel_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.258 13:44:43 accel_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.258 13:44:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.258 ************************************ 00:13:12.258 START TEST accel_assign_opcode 00:13:12.258 ************************************ 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1129 -- # accel_assign_opcode_test_suite 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:12.258 [2024-12-05 13:44:43.590541] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.258 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:12.259 [2024-12-05 13:44:43.598548] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:12.259 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.259 13:44:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:12.259 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.259 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:12.518 [2024-12-05 13:44:43.828133] 'OCF_Core' volume operations 
registered 00:13:12.518 [2024-12-05 13:44:43.828172] 'OCF_Cache' volume operations registered 00:13:12.518 [2024-12-05 13:44:43.832603] 'OCF Composite' volume operations registered 00:13:12.518 [2024-12-05 13:44:43.837062] 'SPDK_block_device' volume operations registered 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.518 software 00:13:12.518 00:13:12.518 real 0m0.415s 00:13:12.518 user 0m0.031s 00:13:12.518 sys 0m0.012s 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.518 13:44:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:12.518 ************************************ 00:13:12.518 END TEST accel_assign_opcode 00:13:12.518 ************************************ 00:13:12.518 13:44:44 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3856430 00:13:12.518 13:44:44 accel_rpc -- common/autotest_common.sh@954 -- # '[' -z 3856430 ']' 00:13:12.518 13:44:44 accel_rpc -- common/autotest_common.sh@958 -- # kill -0 3856430 00:13:12.518 13:44:44 accel_rpc -- common/autotest_common.sh@959 -- # uname 00:13:12.518 13:44:44 accel_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.518 13:44:44 accel_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3856430 00:13:12.777 13:44:44 accel_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.778 13:44:44 accel_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.778 13:44:44 accel_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3856430' 00:13:12.778 killing process with pid 3856430 00:13:12.778 13:44:44 accel_rpc -- common/autotest_common.sh@973 -- # kill 3856430 00:13:12.778 13:44:44 accel_rpc -- common/autotest_common.sh@978 -- # wait 3856430 00:13:13.346 00:13:13.346 real 0m2.204s 00:13:13.346 user 0m2.130s 00:13:13.346 sys 0m0.690s 00:13:13.346 13:44:44 accel_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.346 13:44:44 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.346 ************************************ 00:13:13.346 END TEST accel_rpc 00:13:13.346 ************************************ 00:13:13.346 13:44:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh 00:13:13.346 13:44:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:13.346 13:44:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.346 13:44:44 -- common/autotest_common.sh@10 -- # set +x 00:13:13.346 ************************************ 00:13:13.346 START TEST app_cmdline 00:13:13.346 ************************************ 00:13:13.346 13:44:44 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh 00:13:13.346 * Looking for test storage... 
00:13:13.346 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app 00:13:13.346 13:44:44 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:13.346 13:44:44 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:13:13.346 13:44:44 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:13.346 13:44:44 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.346 13:44:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.606 13:44:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:13.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.606 --rc genhtml_branch_coverage=1 00:13:13.606 --rc genhtml_function_coverage=1 00:13:13.606 --rc genhtml_legend=1 00:13:13.606 --rc geninfo_all_blocks=1 00:13:13.606 --rc geninfo_unexecuted_blocks=1 00:13:13.606 00:13:13.606 ' 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:13.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.606 --rc genhtml_branch_coverage=1 00:13:13.606 --rc genhtml_function_coverage=1 00:13:13.606 --rc genhtml_legend=1 00:13:13.606 --rc geninfo_all_blocks=1 00:13:13.606 --rc geninfo_unexecuted_blocks=1 
00:13:13.606 00:13:13.606 ' 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:13.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.606 --rc genhtml_branch_coverage=1 00:13:13.606 --rc genhtml_function_coverage=1 00:13:13.606 --rc genhtml_legend=1 00:13:13.606 --rc geninfo_all_blocks=1 00:13:13.606 --rc geninfo_unexecuted_blocks=1 00:13:13.606 00:13:13.606 ' 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:13.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.606 --rc genhtml_branch_coverage=1 00:13:13.606 --rc genhtml_function_coverage=1 00:13:13.606 --rc genhtml_legend=1 00:13:13.606 --rc geninfo_all_blocks=1 00:13:13.606 --rc geninfo_unexecuted_blocks=1 00:13:13.606 00:13:13.606 ' 00:13:13.606 13:44:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:13.606 13:44:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3856780 00:13:13.606 13:44:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3856780 00:13:13.606 13:44:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3856780 ']' 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.606 13:44:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:13.606 [2024-12-05 13:44:44.944597] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
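For reference, the lt/cmp_versions walk traced above (scripts/common.sh) splits both version strings on '.', '-' and ':' and compares them field by field; the run here concludes that the installed lcov (1.15) is older than 2, which enables the extra coverage flags. A simplified sketch of that comparison, covering only the '<' path exercised in the trace (an approximation, not the exact upstream code):
lt() { cmp_versions "$1" '<' "$2"; }                     # returns 0 (true) when $1 < $2
cmp_versions() {
    local -a ver1 ver2
    local v ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1    # field greater: "<" fails
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0    # field smaller: "<" holds
    done
    return 1                                             # equal versions: not strictly less
}
lt 1.15 2 && echo "installed lcov is older than 2"       # matches the trace above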
00:13:13.606 [2024-12-05 13:44:44.944684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856780 ] 00:13:13.606 [2024-12-05 13:44:45.065212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.606 [2024-12-05 13:44:45.118598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.865 [2024-12-05 13:44:45.332946] 'OCF_Core' volume operations registered 00:13:13.865 [2024-12-05 13:44:45.332990] 'OCF_Cache' volume operations registered 00:13:13.865 [2024-12-05 13:44:45.337398] 'OCF Composite' volume operations registered 00:13:13.865 [2024-12-05 13:44:45.341896] 'SPDK_block_device' volume operations registered 00:13:14.131 13:44:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.131 13:44:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:13:14.131 13:44:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:13:14.390 { 00:13:14.390 "version": "SPDK v25.01-pre git sha1 62083ef48", 00:13:14.390 "fields": { 00:13:14.390 "major": 25, 00:13:14.390 "minor": 1, 00:13:14.390 "patch": 0, 00:13:14.390 "suffix": "-pre", 00:13:14.390 "commit": "62083ef48" 00:13:14.390 } 00:13:14.390 } 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:14.390 13:44:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py ]] 00:13:14.390 13:44:45 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:14.649 request: 00:13:14.649 { 00:13:14.649 "method": "env_dpdk_get_mem_stats", 00:13:14.649 "req_id": 1 00:13:14.649 } 00:13:14.649 Got JSON-RPC error response 00:13:14.649 response: 00:13:14.649 { 00:13:14.649 "code": -32601, 00:13:14.649 "message": "Method not found" 00:13:14.649 } 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:14.649 13:44:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3856780 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3856780 ']' 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3856780 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.649 13:44:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3856780 00:13:14.649 13:44:46 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.649 13:44:46 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.649 13:44:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3856780' 00:13:14.649 killing process with pid 3856780 00:13:14.649 13:44:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 3856780 00:13:14.649 13:44:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 3856780 00:13:15.218 00:13:15.218 real 0m1.892s 00:13:15.218 user 0m2.025s 00:13:15.218 sys 0m0.673s 00:13:15.218 13:44:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.218 13:44:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:15.218 ************************************ 00:13:15.218 END TEST app_cmdline 00:13:15.218 ************************************ 00:13:15.218 13:44:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh 00:13:15.218 13:44:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.218 13:44:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.218 13:44:46 -- common/autotest_common.sh@10 -- # set +x 00:13:15.218 ************************************ 00:13:15.218 START TEST version 00:13:15.218 ************************************ 00:13:15.218 13:44:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh 00:13:15.218 * Looking for test storage... 
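The 'Method not found' response above is the expected outcome: this spdk_tgt instance was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other JSON-RPC method is rejected with error -32601. A condensed reproduction of the three calls cmdline.sh makes (paths shortened relative to the workspace root; illustrative only):
./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# ...once the target is listening on /var/tmp/spdk.sock:
./scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown above
./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two whitelisted methods
./scripts/rpc.py env_dpdk_get_mem_stats    # rejected with JSON-RPC error -32601 "Method not found"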
00:13:15.218 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app 00:13:15.218 13:44:46 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.218 13:44:46 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.218 13:44:46 version -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.478 13:44:46 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.478 13:44:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.478 13:44:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.478 13:44:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.478 13:44:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.478 13:44:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.478 13:44:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.478 13:44:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.478 13:44:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.478 13:44:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.478 13:44:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.478 13:44:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.478 13:44:46 version -- scripts/common.sh@344 -- # case "$op" in 00:13:15.478 13:44:46 version -- scripts/common.sh@345 -- # : 1 00:13:15.478 13:44:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.478 13:44:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.478 13:44:46 version -- scripts/common.sh@365 -- # decimal 1 00:13:15.478 13:44:46 version -- scripts/common.sh@353 -- # local d=1 00:13:15.478 13:44:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.478 13:44:46 version -- scripts/common.sh@355 -- # echo 1 00:13:15.478 13:44:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.478 13:44:46 version -- scripts/common.sh@366 -- # decimal 2 00:13:15.478 13:44:46 version -- scripts/common.sh@353 -- # local d=2 00:13:15.478 13:44:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.478 13:44:46 version -- scripts/common.sh@355 -- # echo 2 00:13:15.478 13:44:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.478 13:44:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.478 13:44:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.478 13:44:46 version -- scripts/common.sh@368 -- # return 0 00:13:15.478 13:44:46 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.478 13:44:46 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.478 --rc genhtml_branch_coverage=1 00:13:15.478 --rc genhtml_function_coverage=1 00:13:15.478 --rc genhtml_legend=1 00:13:15.478 --rc geninfo_all_blocks=1 00:13:15.478 --rc geninfo_unexecuted_blocks=1 00:13:15.478 00:13:15.478 ' 00:13:15.478 13:44:46 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.478 --rc genhtml_branch_coverage=1 00:13:15.478 --rc genhtml_function_coverage=1 00:13:15.478 --rc genhtml_legend=1 00:13:15.478 --rc geninfo_all_blocks=1 00:13:15.478 --rc geninfo_unexecuted_blocks=1 00:13:15.478 00:13:15.478 ' 00:13:15.478 13:44:46 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.478 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.479 --rc genhtml_branch_coverage=1 00:13:15.479 --rc genhtml_function_coverage=1 00:13:15.479 --rc genhtml_legend=1 00:13:15.479 --rc geninfo_all_blocks=1 00:13:15.479 --rc geninfo_unexecuted_blocks=1 00:13:15.479 00:13:15.479 ' 00:13:15.479 13:44:46 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.479 --rc genhtml_branch_coverage=1 00:13:15.479 --rc genhtml_function_coverage=1 00:13:15.479 --rc genhtml_legend=1 00:13:15.479 --rc geninfo_all_blocks=1 00:13:15.479 --rc geninfo_unexecuted_blocks=1 00:13:15.479 00:13:15.479 ' 00:13:15.479 13:44:46 version -- app/version.sh@17 -- # get_header_version major 00:13:15.479 13:44:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # cut -f2 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.479 13:44:46 version -- app/version.sh@17 -- # major=25 00:13:15.479 13:44:46 version -- app/version.sh@18 -- # get_header_version minor 00:13:15.479 13:44:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # cut -f2 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.479 13:44:46 version -- app/version.sh@18 -- # minor=1 00:13:15.479 13:44:46 version -- app/version.sh@19 -- # get_header_version patch 00:13:15.479 13:44:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # cut -f2 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.479 13:44:46 version -- app/version.sh@19 -- # patch=0 00:13:15.479 13:44:46 version -- app/version.sh@20 -- # get_header_version suffix 00:13:15.479 13:44:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # cut -f2 00:13:15.479 13:44:46 version -- app/version.sh@14 -- # tr -d '"' 00:13:15.479 13:44:46 version -- app/version.sh@20 -- # suffix=-pre 00:13:15.479 13:44:46 version -- app/version.sh@22 -- # version=25.1 00:13:15.479 13:44:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:15.479 13:44:46 version -- app/version.sh@28 -- # version=25.1rc0 00:13:15.479 13:44:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python 00:13:15.479 13:44:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:15.479 13:44:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:13:15.479 13:44:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:13:15.479 00:13:15.479 real 0m0.254s 00:13:15.479 user 0m0.141s 00:13:15.479 sys 0m0.151s 00:13:15.479 13:44:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.479 13:44:46 version -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.479 ************************************ 00:13:15.479 END TEST version 00:13:15.479 ************************************ 00:13:15.479 13:44:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:13:15.479 13:44:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:13:15.479 13:44:46 -- spdk/autotest.sh@194 -- # uname -s 00:13:15.479 13:44:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:15.479 13:44:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:15.479 13:44:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:15.479 13:44:46 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:13:15.479 13:44:46 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme 00:13:15.479 13:44:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.479 13:44:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.479 13:44:46 -- common/autotest_common.sh@10 -- # set +x 00:13:15.739 ************************************ 00:13:15.739 START TEST blockdev_nvme 00:13:15.739 ************************************ 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme 00:13:15.739 * Looking for test storage... 00:13:15.739 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.739 13:44:47 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.739 --rc genhtml_branch_coverage=1 00:13:15.739 --rc genhtml_function_coverage=1 00:13:15.739 --rc genhtml_legend=1 00:13:15.739 --rc geninfo_all_blocks=1 00:13:15.739 --rc geninfo_unexecuted_blocks=1 00:13:15.739 00:13:15.739 ' 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.739 --rc genhtml_branch_coverage=1 00:13:15.739 --rc genhtml_function_coverage=1 00:13:15.739 --rc genhtml_legend=1 00:13:15.739 --rc geninfo_all_blocks=1 00:13:15.739 --rc geninfo_unexecuted_blocks=1 00:13:15.739 00:13:15.739 ' 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.739 --rc genhtml_branch_coverage=1 00:13:15.739 --rc genhtml_function_coverage=1 00:13:15.739 --rc genhtml_legend=1 00:13:15.739 --rc geninfo_all_blocks=1 00:13:15.739 --rc geninfo_unexecuted_blocks=1 00:13:15.739 00:13:15.739 ' 00:13:15.739 13:44:47 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.739 --rc genhtml_branch_coverage=1 00:13:15.739 --rc genhtml_function_coverage=1 00:13:15.739 --rc genhtml_legend=1 00:13:15.739 --rc geninfo_all_blocks=1 00:13:15.739 --rc geninfo_unexecuted_blocks=1 00:13:15.739 00:13:15.739 ' 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:13:15.740 13:44:47 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json 00:13:15.740 
13:44:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3857270 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:15.740 13:44:47 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 3857270 00:13:15.740 13:44:47 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 3857270 ']' 00:13:15.740 13:44:47 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.740 13:44:47 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.740 13:44:47 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.740 13:44:47 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.740 13:44:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.740 [2024-12-05 13:44:47.226410] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
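Each suite in this log repeats the same harness pattern: launch an SPDK application, remember its pid, poll the UNIX-domain RPC socket until it answers, drive it with rpc.py, then kill and wait on the pid. A rough sketch of that pattern (start_and_wait and its internals are illustrative stand-ins, not the actual waitforlisten/killprocess helpers):
start_and_wait() {
    ./build/bin/spdk_tgt "$@" &
    spdk_tgt_pid=$!
    # poll the JSON-RPC socket until the target answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
}
start_and_wait
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs
kill "$spdk_tgt_pid"; wait "$spdk_tgt_pid"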
00:13:15.740 [2024-12-05 13:44:47.226468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857270 ] 00:13:15.999 [2024-12-05 13:44:47.332929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.999 [2024-12-05 13:44:47.389002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.259 [2024-12-05 13:44:47.611341] 'OCF_Core' volume operations registered 00:13:16.259 [2024-12-05 13:44:47.611376] 'OCF_Cache' volume operations registered 00:13:16.259 [2024-12-05 13:44:47.615801] 'OCF Composite' volume operations registered 00:13:16.259 [2024-12-05 13:44:47.620264] 'SPDK_block_device' volume operations registered 00:13:16.518 13:44:47 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.518 13:44:47 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:13:16.518 13:44:47 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:13:16.518 13:44:47 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:13:16.518 13:44:47 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:13:16.518 13:44:47 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:16.518 13:44:47 blockdev_nvme -- bdev/blockdev.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:13:16.518 13:44:47 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:d8:00.0" } } ] }'\''' 00:13:16.518 13:44:47 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.518 13:44:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@785 -- 
# mapfile -t bdevs 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:13:19.807 13:44:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:13:19.807 13:44:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:13:19.808 13:44:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "09b9d4d1-7ed1-435b-9dba-eaa70839c28f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 512,' ' "num_blocks": 7814037168,' ' "uuid": "09b9d4d1-7ed1-435b-9dba-eaa70839c28f",' ' "numa_id": 1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:d8:00.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:d8:00.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x8086",' ' "model_number": "INTEL SSDPE2KX040T8",' ' "serial_number": "BTLJ8234018V4P0DGN",' ' "firmware_revision": "VDV1Y295",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 1,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.2"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:19.808 13:44:50 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:13:19.808 13:44:50 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:13:19.808 13:44:50 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:13:19.808 13:44:50 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 3857270 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 3857270 ']' 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 3857270 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3857270 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3857270' 00:13:19.808 killing process with pid 3857270 00:13:19.808 13:44:50 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 3857270 00:13:19.808 13:44:50 blockdev_nvme -- 
common/autotest_common.sh@978 -- # wait 3857270 00:13:24.105 13:44:55 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:24.105 13:44:55 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:24.105 13:44:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:24.105 13:44:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.105 13:44:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.105 ************************************ 00:13:24.105 START TEST bdev_hello_world 00:13:24.105 ************************************ 00:13:24.105 13:44:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:24.105 [2024-12-05 13:44:55.215852] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:13:24.105 [2024-12-05 13:44:55.215890] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858356 ] 00:13:24.105 [2024-12-05 13:44:55.322952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.105 [2024-12-05 13:44:55.378148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.105 [2024-12-05 13:44:55.625293] 'OCF_Core' volume operations registered 00:13:24.105 [2024-12-05 13:44:55.625325] 'OCF_Cache' volume operations registered 00:13:24.363 [2024-12-05 13:44:55.629427] 'OCF Composite' volume operations registered 00:13:24.363 [2024-12-05 13:44:55.633524] 'SPDK_block_device' volume operations registered 00:13:27.649 [2024-12-05 13:44:58.510368] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:27.649 [2024-12-05 13:44:58.510404] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:27.649 [2024-12-05 13:44:58.510423] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:27.649 [2024-12-05 13:44:58.513680] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:27.649 [2024-12-05 13:44:58.513850] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:27.649 [2024-12-05 13:44:58.513868] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:27.649 [2024-12-05 13:44:58.514594] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
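The long JSON dump a little further up is bdev_get_bdevs describing the attached Nvme0n1 disk; blockdev.sh filters it for unclaimed bdevs to pick the device under test before handing Nvme0n1 to hello_bdev. The equivalent manual query, compressed into one jq filter (a sketch; paths shortened):
./scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'
# -> Nvme0n1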
00:13:27.649 00:13:27.649 [2024-12-05 13:44:58.514612] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:31.832 00:13:31.832 real 0m7.327s 00:13:31.832 user 0m5.966s 00:13:31.832 sys 0m0.606s 00:13:31.832 13:45:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.832 13:45:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:31.832 ************************************ 00:13:31.832 END TEST bdev_hello_world 00:13:31.832 ************************************ 00:13:31.832 13:45:02 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:13:31.832 13:45:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.832 13:45:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.832 13:45:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.832 ************************************ 00:13:31.832 START TEST bdev_bounds 00:13:31.832 ************************************ 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3859392 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3859392' 00:13:31.832 Process bdevio pid: 3859392 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3859392 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 3859392 ']' 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.832 13:45:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:31.832 [2024-12-05 13:45:02.612092] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
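The START TEST / END TEST banners and the real/user/sys line that just closed bdev_hello_world come from the run_test wrapper in autotest_common.sh; a stripped-down sketch of what that wrapper does (the real helper also handles xtrace toggling and exit-status bookkeeping, omitted here):
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # run the suite; bash prints real/user/sys when it finishes
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
run_test bdev_hello_world ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1 ''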
00:13:31.832 [2024-12-05 13:45:02.612140] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859392 ] 00:13:31.832 [2024-12-05 13:45:02.719008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.832 [2024-12-05 13:45:02.781330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.832 [2024-12-05 13:45:02.781406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.832 [2024-12-05 13:45:02.781410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.832 [2024-12-05 13:45:03.031010] 'OCF_Core' volume operations registered 00:13:31.832 [2024-12-05 13:45:03.031048] 'OCF_Cache' volume operations registered 00:13:31.832 [2024-12-05 13:45:03.035481] 'OCF Composite' volume operations registered 00:13:31.832 [2024-12-05 13:45:03.039921] 'SPDK_block_device' volume operations registered 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:35.130 I/O targets: 00:13:35.130 Nvme0n1: 7814037168 blocks of 512 bytes (3815448 MiB) 00:13:35.130 00:13:35.130 00:13:35.130 CUnit - A unit testing framework for C - Version 2.1-3 00:13:35.130 http://cunit.sourceforge.net/ 00:13:35.130 00:13:35.130 00:13:35.130 Suite: bdevio tests on: Nvme0n1 00:13:35.130 Test: blockdev write read block ...passed 00:13:35.130 Test: blockdev write zeroes read block ...passed 00:13:35.130 Test: blockdev write zeroes read no split ...passed 00:13:35.130 Test: blockdev write zeroes read split ...passed 00:13:35.130 Test: blockdev write zeroes read split partial ...passed 00:13:35.130 Test: blockdev reset ...[2024-12-05 13:45:06.507130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:d8:00.0, 0] resetting controller 00:13:35.130 [2024-12-05 13:45:06.509596] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:d8:00.0, 0] Resetting controller successful. 
00:13:35.130 passed 00:13:35.130 Test: blockdev write read 8 blocks ...passed 00:13:35.130 Test: blockdev write read size > 128k ...passed 00:13:35.130 Test: blockdev write read invalid size ...passed 00:13:35.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:35.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:35.130 Test: blockdev write read max offset ...passed 00:13:35.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:35.130 Test: blockdev writev readv 8 blocks ...passed 00:13:35.130 Test: blockdev writev readv 30 x 1block ...passed 00:13:35.130 Test: blockdev writev readv block ...passed 00:13:35.130 Test: blockdev writev readv size > 128k ...passed 00:13:35.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:35.130 Test: blockdev comparev and writev ...passed 00:13:35.130 Test: blockdev nvme passthru rw ...passed 00:13:35.130 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:45:06.525372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:895 PRP1 0x0 PRP2 0x0 00:13:35.130 [2024-12-05 13:45:06.525400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:895 cdw0:0 sqhd:0056 p:1 m:0 dnr:1 00:13:35.130 passed 00:13:35.130 Test: blockdev nvme admin passthru ...passed 00:13:35.130 Test: blockdev copy ...passed 00:13:35.130 00:13:35.130 Run Summary: Type Total Ran Passed Failed Inactive 00:13:35.130 suites 1 1 n/a 0 0 00:13:35.130 tests 23 23 23 0 0 00:13:35.130 asserts 140 140 140 0 n/a 00:13:35.130 00:13:35.130 Elapsed time = 0.106 seconds 00:13:35.130 0 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3859392 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 3859392 ']' 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 3859392 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859392 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859392' 00:13:35.130 killing process with pid 3859392 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 3859392 00:13:35.130 13:45:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 3859392 00:13:39.337 13:45:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:39.337 00:13:39.337 real 0m8.050s 00:13:39.337 user 0m22.914s 00:13:39.337 sys 0m0.817s 00:13:39.337 13:45:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.337 13:45:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:39.337 ************************************ 00:13:39.337 END TEST bdev_bounds 00:13:39.337 ************************************ 00:13:39.337 13:45:10 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 '' 00:13:39.337 13:45:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:39.337 13:45:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.337 13:45:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.337 ************************************ 00:13:39.337 START TEST bdev_nbd 00:13:39.337 ************************************ 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 '' 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1') 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1') 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3860863 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3860863 /var/tmp/spdk-nbd.sock 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 3860863 ']' 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:39.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
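What follows is nbd_common.sh exporting the Nvme0n1 bdev as /dev/nbd0 through the bdev_svc RPC socket, waiting for the kernel to publish the device, reading one 4 KiB block with direct I/O, and tearing the mapping back down. Condensed into the manual steps it corresponds to (a sketch; paths shortened):
sock=/var/tmp/spdk-nbd.sock
./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0   # export the bdev as an NBD device
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done      # wait for the kernel node to appear
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # single-block read, as traced below
./scripts/rpc.py -s "$sock" nbd_get_disks                       # [{"nbd_device": "/dev/nbd0", "bdev_name": "Nvme0n1"}]
./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0             # tear the mapping down again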
00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.337 13:45:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:39.337 [2024-12-05 13:45:10.787230] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:13:39.338 [2024-12-05 13:45:10.787299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.595 [2024-12-05 13:45:10.909360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.595 [2024-12-05 13:45:10.965124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.853 [2024-12-05 13:45:11.213525] 'OCF_Core' volume operations registered 00:13:39.853 [2024-12-05 13:45:11.213566] 'OCF_Cache' volume operations registered 00:13:39.853 [2024-12-05 13:45:11.218018] 'OCF Composite' volume operations registered 00:13:39.853 [2024-12-05 13:45:11.222497] 'SPDK_block_device' volume operations registered 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.135 13:45:14 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.135 1+0 records in 00:13:43.135 1+0 records out 00:13:43.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284035 s, 14.4 MB/s 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:43.135 { 00:13:43.135 "nbd_device": "/dev/nbd0", 00:13:43.135 "bdev_name": "Nvme0n1" 00:13:43.135 } 00:13:43.135 ]' 00:13:43.135 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:43.393 { 00:13:43.393 "nbd_device": "/dev/nbd0", 00:13:43.393 "bdev_name": "Nvme0n1" 00:13:43.393 } 00:13:43.393 ]' 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:43.393 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.650 13:45:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.907 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:44.165 /dev/nbd0 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
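nbd_start_disk then exports the bdev as a kernel block device, and the waitfornbd loop above polls /proc/partitions until the node appears. Roughly, with the 20-try limit taken from the trace (the retry interval is not visible there, so the sleep below is a guess):

    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    nbd_device=$($rpc nbd_start_disk Nvme0n1)      # the trace shows this returning /dev/nbd0
    nbd_name=$(basename "$nbd_device")

    # waitfornbd: wait for the kernel to publish the new nbd node
    for i in $(seq 1 20); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                  # interval assumed, not shown in the trace
    done

    # Sanity read: one 4 KiB block with direct I/O, as in the dd lines above
    dd if="$nbd_device" of="$SPDK/test/bdev/nbdtest" bs=4096 count=1 iflag=direct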
00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.165 1+0 records in 00:13:44.165 1+0 records out 00:13:44.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023206 s, 17.7 MB/s 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.165 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:44.422 { 00:13:44.422 "nbd_device": "/dev/nbd0", 00:13:44.422 "bdev_name": "Nvme0n1" 00:13:44.422 } 00:13:44.422 ]' 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:44.422 { 00:13:44.422 "nbd_device": "/dev/nbd0", 00:13:44.422 "bdev_name": "Nvme0n1" 00:13:44.422 } 00:13:44.422 ]' 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:13:44.422 
13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:44.422 256+0 records in 00:13:44.422 256+0 records out 00:13:44.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105392 s, 99.5 MB/s 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:44.422 256+0 records in 00:13:44.422 256+0 records out 00:13:44.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227969 s, 46.0 MB/s 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.422 13:45:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:44.680 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.680 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.680 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.680 13:45:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.680 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.680 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:44.937 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:45.195 malloc_lvol_verify 00:13:45.453 13:45:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:45.710 1385a578-2e2c-456d-bf6a-3b33c189e930 00:13:45.711 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:45.967 0b295b91-ea61-44ce-bcc6-33222aa7a68b 00:13:45.967 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:46.226 /dev/nbd0 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:46.226 mke2fs 1.47.0 (5-Feb-2023) 00:13:46.226 Discarding device blocks: 0/4096 done 00:13:46.226 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:46.226 00:13:46.226 Allocating group tables: 0/1 done 00:13:46.226 Writing inode tables: 0/1 done 00:13:46.226 Creating journal (1024 blocks): done 00:13:46.226 Writing superblocks and filesystem accounting information: 0/1 done 00:13:46.226 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.226 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3860863 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 3860863 ']' 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 3860863 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3860863 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3860863' 00:13:46.484 killing process with pid 3860863 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 3860863 00:13:46.484 13:45:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 3860863 00:13:50.668 13:45:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:50.668 00:13:50.668 real 0m11.240s 00:13:50.668 user 0m12.654s 00:13:50.668 sys 0m2.102s 00:13:50.668 13:45:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 
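The nbd_with_lvol_verify step that just finished repeats the exercise on a logical volume instead of the raw NVMe bdev: a small malloc bdev becomes an lvolstore, a 4 MiB lvol is carved out of it, exported over NBD and formatted. Condensed from the RPCs in the trace:

    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512-byte blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside store "lvs"
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0

    mkfs.ext4 /dev/nbd0                                    # the mke2fs output above comes from here
    $rpc nbd_stop_disk /dev/nbd0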
00:13:50.668 13:45:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:50.668 ************************************ 00:13:50.668 END TEST bdev_nbd 00:13:50.668 ************************************ 00:13:50.668 13:45:22 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:13:50.668 13:45:22 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:13:50.668 13:45:22 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:13:50.668 skipping fio tests on NVMe due to multi-ns failures. 00:13:50.668 13:45:22 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:50.668 13:45:22 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:50.668 13:45:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:50.668 13:45:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.668 13:45:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.668 ************************************ 00:13:50.668 START TEST bdev_verify 00:13:50.668 ************************************ 00:13:50.668 13:45:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:50.668 [2024-12-05 13:45:22.103989] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:13:50.668 [2024-12-05 13:45:22.104056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862527 ] 00:13:50.925 [2024-12-05 13:45:22.225416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.925 [2024-12-05 13:45:22.280391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.925 [2024-12-05 13:45:22.280396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.182 [2024-12-05 13:45:22.520325] 'OCF_Core' volume operations registered 00:13:51.182 [2024-12-05 13:45:22.520373] 'OCF_Cache' volume operations registered 00:13:51.182 [2024-12-05 13:45:22.524669] 'OCF Composite' volume operations registered 00:13:51.182 [2024-12-05 13:45:22.528996] 'SPDK_block_device' volume operations registered 00:13:54.464 Running I/O for 5 seconds... 
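The bdev_verify test that has just started is the bdevperf example app pointed at the same bdev.json, running a five-second verify workload across two cores. The invocation as captured in the trace (flag annotations per bdevperf's help text; -C is simply passed through by the harness as shown):

    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk

    # -q 128: queue depth, -o 4096: 4 KiB I/Os, -w verify: write/read/compare,
    # -t 5: run time in seconds, -m 0x3: core mask for reactors 0 and 1
    $SPDK/build/examples/bdevperf --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # bdev_verify_big_io, further down, repeats this with -o 65536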
00:13:55.964 23265.00 IOPS, 90.88 MiB/s [2024-12-05T12:45:28.863Z] 23448.50 IOPS, 91.60 MiB/s [2024-12-05T12:45:29.798Z] 23510.33 IOPS, 91.84 MiB/s [2024-12-05T12:45:30.730Z] 23465.00 IOPS, 91.66 MiB/s 00:13:59.204 Latency(us) 00:13:59.204 [2024-12-05T12:45:30.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.204 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.204 Verification LBA range: start 0x0 length 0x1d1c0beb 00:13:59.204 Nvme0n1 : 5.00 9958.36 38.90 0.00 0.00 12782.61 50.09 13563.10 00:13:59.204 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.204 Verification LBA range: start 0x1d1c0beb length 0x1d1c0beb 00:13:59.204 Nvme0n1 : 5.01 13425.76 52.44 0.00 0.00 9486.83 17.70 12366.36 00:13:59.204 [2024-12-05T12:45:30.730Z] =================================================================================================================== 00:13:59.204 [2024-12-05T12:45:30.730Z] Total : 23384.12 91.34 0.00 0.00 10889.86 17.70 13563.10 00:14:03.441 00:14:03.441 real 0m12.511s 00:14:03.441 user 0m23.228s 00:14:03.441 sys 0m0.663s 00:14:03.441 13:45:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.441 13:45:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:03.441 ************************************ 00:14:03.441 END TEST bdev_verify 00:14:03.441 ************************************ 00:14:03.441 13:45:34 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:03.441 13:45:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:03.441 13:45:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.441 13:45:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.441 ************************************ 00:14:03.441 START TEST bdev_verify_big_io 00:14:03.441 ************************************ 00:14:03.441 13:45:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:03.441 [2024-12-05 13:45:34.708409] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:14:03.441 [2024-12-05 13:45:34.708478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864136 ] 00:14:03.441 [2024-12-05 13:45:34.833714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:03.441 [2024-12-05 13:45:34.892857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.441 [2024-12-05 13:45:34.892862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.699 [2024-12-05 13:45:35.145671] 'OCF_Core' volume operations registered 00:14:03.699 [2024-12-05 13:45:35.145707] 'OCF_Cache' volume operations registered 00:14:03.699 [2024-12-05 13:45:35.149927] 'OCF Composite' volume operations registered 00:14:03.699 [2024-12-05 13:45:35.154119] 'SPDK_block_device' volume operations registered 00:14:06.974 Running I/O for 5 seconds... 00:14:08.833 1391.00 IOPS, 86.94 MiB/s [2024-12-05T12:45:41.729Z] 1504.00 IOPS, 94.00 MiB/s [2024-12-05T12:45:42.663Z] 1541.67 IOPS, 96.35 MiB/s [2024-12-05T12:45:43.231Z] 1566.75 IOPS, 97.92 MiB/s 00:14:11.705 Latency(us) 00:14:11.705 [2024-12-05T12:45:43.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.705 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:11.705 Verification LBA range: start 0x0 length 0x1d1c0be 00:14:11.705 Nvme0n1 : 5.05 617.87 38.62 0.00 0.00 200678.71 2279.51 210627.01 00:14:11.705 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:11.705 Verification LBA range: start 0x1d1c0be length 0x1d1c0be 00:14:11.705 Nvme0n1 : 5.04 891.88 55.74 0.00 0.00 140040.32 776.46 155006.89 00:14:11.705 [2024-12-05T12:45:43.231Z] =================================================================================================================== 00:14:11.705 [2024-12-05T12:45:43.231Z] Total : 1509.75 94.36 0.00 0.00 164883.11 776.46 210627.01 00:14:15.885 00:14:15.885 real 0m12.457s 00:14:15.885 user 0m23.105s 00:14:15.885 sys 0m0.675s 00:14:15.885 13:45:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.885 13:45:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.885 ************************************ 00:14:15.885 END TEST bdev_verify_big_io 00:14:15.885 ************************************ 00:14:15.885 13:45:47 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:15.885 13:45:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:15.885 13:45:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.885 13:45:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.885 ************************************ 00:14:15.885 START TEST bdev_write_zeroes 00:14:15.885 ************************************ 00:14:15.885 13:45:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:15.885 [2024-12-05 13:45:47.240646] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 
initialization... 00:14:15.885 [2024-12-05 13:45:47.240708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865737 ] 00:14:15.885 [2024-12-05 13:45:47.362682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.143 [2024-12-05 13:45:47.419053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.400 [2024-12-05 13:45:47.675996] 'OCF_Core' volume operations registered 00:14:16.400 [2024-12-05 13:45:47.676032] 'OCF_Cache' volume operations registered 00:14:16.400 [2024-12-05 13:45:47.680558] 'OCF Composite' volume operations registered 00:14:16.400 [2024-12-05 13:45:47.685024] 'SPDK_block_device' volume operations registered 00:14:19.677 Running I/O for 1 seconds... 00:14:20.241 59520.00 IOPS, 232.50 MiB/s 00:14:20.241 Latency(us) 00:14:20.241 [2024-12-05T12:45:51.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.241 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:20.241 Nvme0n1 : 1.01 59396.93 232.02 0.00 0.00 2147.76 776.46 4530.53 00:14:20.241 [2024-12-05T12:45:51.767Z] =================================================================================================================== 00:14:20.241 [2024-12-05T12:45:51.767Z] Total : 59396.93 232.02 0.00 0.00 2147.76 776.46 4530.53 00:14:24.569 00:14:24.569 real 0m8.400s 00:14:24.569 user 0m6.988s 00:14:24.569 sys 0m0.646s 00:14:24.569 13:45:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.569 13:45:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:24.569 ************************************ 00:14:24.569 END TEST bdev_write_zeroes 00:14:24.569 ************************************ 00:14:24.569 13:45:55 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:24.569 13:45:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:24.569 13:45:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.569 13:45:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.569 ************************************ 00:14:24.569 START TEST bdev_json_nonenclosed 00:14:24.569 ************************************ 00:14:24.569 13:45:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:24.569 [2024-12-05 13:45:55.718881] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:14:24.569 [2024-12-05 13:45:55.718940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866821 ] 00:14:24.569 [2024-12-05 13:45:55.839744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.569 [2024-12-05 13:45:55.895060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.569 [2024-12-05 13:45:55.895139] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:24.569 [2024-12-05 13:45:55.895158] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:24.569 [2024-12-05 13:45:55.895171] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:24.569 00:14:24.569 real 0m0.295s 00:14:24.569 user 0m0.169s 00:14:24.569 sys 0m0.123s 00:14:24.570 13:45:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.570 13:45:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:24.570 ************************************ 00:14:24.570 END TEST bdev_json_nonenclosed 00:14:24.570 ************************************ 00:14:24.570 13:45:55 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:24.570 13:45:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:24.570 13:45:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.570 13:45:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.570 ************************************ 00:14:24.570 START TEST bdev_json_nonarray 00:14:24.570 ************************************ 00:14:24.570 13:45:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:24.570 [2024-12-05 13:45:56.084158] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:14:24.570 [2024-12-05 13:45:56.084219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866998 ] 00:14:24.835 [2024-12-05 13:45:56.202849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.835 [2024-12-05 13:45:56.256123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.835 [2024-12-05 13:45:56.256203] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:14:24.835 [2024-12-05 13:45:56.256221] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:24.835 [2024-12-05 13:45:56.256234] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:24.835 00:14:24.835 real 0m0.286s 00:14:24.835 user 0m0.168s 00:14:24.835 sys 0m0.116s 00:14:24.835 13:45:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.835 13:45:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:24.835 ************************************ 00:14:24.835 END TEST bdev_json_nonarray 00:14:24.835 ************************************ 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:14:25.093 13:45:56 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:14:25.093 00:14:25.094 real 1m9.368s 00:14:25.094 user 1m42.373s 00:14:25.094 sys 0m7.256s 00:14:25.094 13:45:56 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.094 13:45:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.094 ************************************ 00:14:25.094 END TEST blockdev_nvme 00:14:25.094 ************************************ 00:14:25.094 13:45:56 -- spdk/autotest.sh@209 -- # uname -s 00:14:25.094 13:45:56 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:14:25.094 13:45:56 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt 00:14:25.094 13:45:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.094 13:45:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.094 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:14:25.094 ************************************ 00:14:25.094 START TEST blockdev_nvme_gpt 00:14:25.094 ************************************ 00:14:25.094 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt 00:14:25.094 * Looking for test storage... 
00:14:25.094 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev 00:14:25.094 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:25.094 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:14:25.094 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.353 13:45:56 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:25.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.353 --rc genhtml_branch_coverage=1 00:14:25.353 --rc genhtml_function_coverage=1 00:14:25.353 --rc genhtml_legend=1 00:14:25.353 --rc geninfo_all_blocks=1 00:14:25.353 --rc geninfo_unexecuted_blocks=1 00:14:25.353 00:14:25.353 ' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:25.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:14:25.353 --rc genhtml_branch_coverage=1 00:14:25.353 --rc genhtml_function_coverage=1 00:14:25.353 --rc genhtml_legend=1 00:14:25.353 --rc geninfo_all_blocks=1 00:14:25.353 --rc geninfo_unexecuted_blocks=1 00:14:25.353 00:14:25.353 ' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:25.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.353 --rc genhtml_branch_coverage=1 00:14:25.353 --rc genhtml_function_coverage=1 00:14:25.353 --rc genhtml_legend=1 00:14:25.353 --rc geninfo_all_blocks=1 00:14:25.353 --rc geninfo_unexecuted_blocks=1 00:14:25.353 00:14:25.353 ' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:25.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.353 --rc genhtml_branch_coverage=1 00:14:25.353 --rc genhtml_function_coverage=1 00:14:25.353 --rc genhtml_legend=1 00:14:25.353 --rc geninfo_all_blocks=1 00:14:25.353 --rc geninfo_unexecuted_blocks=1 00:14:25.353 00:14:25.353 ' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3867081 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 
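setup_gpt_conf, which runs next, resets the PCI bindings through setup.sh and then keeps only namespaces the kernel does not report as zoned; that is the is_block_zoned probe of /sys/block/*/queue/zoned visible a little further down. A reduced sketch of that filter:

    # Keep only non-zoned NVMe namespaces as GPT candidates ("none" means a regular device)
    nvme_devs=()
    for dev in /sys/block/nvme*n*; do
        zoned=$(cat "$dev/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned == none ]] && nvme_devs+=("$dev")
    done
    echo "GPT candidates: ${nvme_devs[*]}"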
00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 3867081 00:14:25.353 13:45:56 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 3867081 ']' 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.353 13:45:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:25.353 [2024-12-05 13:45:56.757237] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:14:25.354 [2024-12-05 13:45:56.757312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867081 ] 00:14:25.354 [2024-12-05 13:45:56.869914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.612 [2024-12-05 13:45:56.929760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.869 [2024-12-05 13:45:57.145185] 'OCF_Core' volume operations registered 00:14:25.869 [2024-12-05 13:45:57.145221] 'OCF_Cache' volume operations registered 00:14:25.869 [2024-12-05 13:45:57.149650] 'OCF Composite' volume operations registered 00:14:25.869 [2024-12-05 13:45:57.154109] 'SPDK_block_device' volume operations registered 00:14:26.435 13:45:57 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.435 13:45:57 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:14:26.435 13:45:57 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:26.435 13:45:57 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:14:26.435 13:45:57 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:14:29.715 Waiting for block devices as requested 00:14:29.715 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:14:29.715 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:14:29.715 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:14:29.972 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:14:29.972 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:14:29.972 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:14:30.230 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:14:30.230 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:14:30.230 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:14:30.487 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:14:30.487 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:14:30.487 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:14:30.745 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:14:30.745 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:14:30.745 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:14:31.020 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:14:31.020 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:14:31.951 13:46:03 blockdev_nvme_gpt -- 
bdev/blockdev.sh@105 -- # get_zoned_devs 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:31.951 13:46:03 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:31.951 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1') 00:14:31.951 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:14:31.951 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:14:31.952 BYT; 00:14:31.952 /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:;' 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:14:31.952 BYT; 00:14:31.952 /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]] 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@415 -- # 
GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]] 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:31.952 13:46:03 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:14:31.952 13:46:03 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:14:33.324 The operation has completed successfully. 00:14:33.324 13:46:04 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:14:34.251 The operation has completed successfully. 
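The two "operation has completed successfully" messages above are sgdisk retyping the partitions that parted just created, so the namespace ends up with SPDK_TEST_first and SPDK_TEST_second carrying the SPDK GPT type GUIDs read out of module/bdev/gpt/gpt.h. Collapsed into one place from the trace:

    dev=/dev/nvme0n1

    # Label the disk and split it into two halves
    parted -s $dev mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%

    # Partition 1 gets the current SPDK GPT type GUID, partition 2 the legacy one
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 $dev
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df $dev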
00:14:34.251 13:46:05 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:14:37.523 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:14:37.523 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:14:37.524 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:14:40.805 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:14:42.179 13:46:13 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:14:42.179 13:46:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.179 13:46:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:42.179 [] 00:14:42.179 13:46:13 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.179 13:46:13 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:14:42.180 13:46:13 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:14:42.180 13:46:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:14:42.180 13:46:13 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:14:42.180 13:46:13 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:d8:00.0" } } ] }'\''' 00:14:42.180 13:46:13 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.180 13:46:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 13:46:16 
blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 512,' ' "num_blocks": 3907016704,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 2048,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 512,' ' "num_blocks": 3907016703,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 3907018752,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # 
jq -r .name 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1p1 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:45.460 13:46:16 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 3867081 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 3867081 ']' 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 3867081 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3867081 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3867081' 00:14:45.460 killing process with pid 3867081 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 3867081 00:14:45.460 13:46:16 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 3867081 00:14:49.641 13:46:20 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:49.641 13:46:20 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:14:49.641 13:46:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:49.641 13:46:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.641 13:46:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:49.641 ************************************ 00:14:49.641 START TEST bdev_hello_world 00:14:49.641 ************************************ 00:14:49.641 13:46:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:14:49.641 [2024-12-05 13:46:20.779510] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:14:49.641 [2024-12-05 13:46:20.779572] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871583 ] 00:14:49.641 [2024-12-05 13:46:20.900198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.641 [2024-12-05 13:46:20.957069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.899 [2024-12-05 13:46:21.190250] 'OCF_Core' volume operations registered 00:14:49.899 [2024-12-05 13:46:21.190281] 'OCF_Cache' volume operations registered 00:14:49.899 [2024-12-05 13:46:21.194297] 'OCF Composite' volume operations registered 00:14:49.899 [2024-12-05 13:46:21.198347] 'SPDK_block_device' volume operations registered 00:14:53.180 [2024-12-05 13:46:24.078197] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:53.180 [2024-12-05 13:46:24.078232] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:14:53.180 [2024-12-05 13:46:24.078249] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:53.180 [2024-12-05 13:46:24.081189] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:53.180 [2024-12-05 13:46:24.081373] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:53.180 [2024-12-05 13:46:24.081391] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:53.180 [2024-12-05 13:46:24.084611] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:14:53.180 00:14:53.180 [2024-12-05 13:46:24.084638] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:57.363 00:14:57.363 real 0m7.325s 00:14:57.363 user 0m5.977s 00:14:57.363 sys 0m0.587s 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:57.363 ************************************ 00:14:57.363 END TEST bdev_hello_world 00:14:57.363 ************************************ 00:14:57.363 13:46:28 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:14:57.363 13:46:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:57.363 13:46:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.363 13:46:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:57.363 ************************************ 00:14:57.363 START TEST bdev_bounds 00:14:57.363 ************************************ 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3872491 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3872491' 00:14:57.363 Process bdevio pid: 3872491 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3872491 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 3872491 ']' 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.363 
13:46:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:57.363 13:46:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:14:57.363 [2024-12-05 13:46:28.167551] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:14:57.363 [2024-12-05 13:46:28.167626] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872491 ] 00:14:57.363 [2024-12-05 13:46:28.290403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.363 [2024-12-05 13:46:28.349755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.363 [2024-12-05 13:46:28.349843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.363 [2024-12-05 13:46:28.349848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.363 [2024-12-05 13:46:28.609254] 'OCF_Core' volume operations registered 00:14:57.363 [2024-12-05 13:46:28.609291] 'OCF_Cache' volume operations registered 00:14:57.363 [2024-12-05 13:46:28.613737] 'OCF Composite' volume operations registered 00:14:57.363 [2024-12-05 13:46:28.618189] 'SPDK_block_device' volume operations registered 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:00.654 I/O targets: 00:15:00.654 Nvme0n1p1: 3907016704 blocks of 512 bytes (1907723 MiB) 00:15:00.654 Nvme0n1p2: 3907016703 blocks of 512 bytes (1907723 MiB) 00:15:00.654 00:15:00.654 00:15:00.654 CUnit - A unit testing framework for C - Version 2.1-3 00:15:00.654 http://cunit.sourceforge.net/ 00:15:00.654 00:15:00.654 00:15:00.654 Suite: bdevio tests on: Nvme0n1p2 00:15:00.654 Test: blockdev write read block ...passed 00:15:00.654 Test: blockdev write zeroes read block ...passed 00:15:00.654 Test: blockdev write zeroes read no split ...passed 00:15:00.654 Test: blockdev write zeroes read split ...passed 00:15:00.654 Test: blockdev write zeroes read split partial ...passed 00:15:00.654 Test: blockdev reset ...[2024-12-05 13:46:31.682929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:d8:00.0, 0] resetting controller 00:15:00.654 [2024-12-05 13:46:31.685489] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:d8:00.0, 0] Resetting controller successful. 
00:15:00.654 passed 00:15:00.654 Test: blockdev write read 8 blocks ...passed 00:15:00.654 Test: blockdev write read size > 128k ...passed 00:15:00.654 Test: blockdev write read invalid size ...passed 00:15:00.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:00.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:00.654 Test: blockdev write read max offset ...passed 00:15:00.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:00.654 Test: blockdev writev readv 8 blocks ...passed 00:15:00.654 Test: blockdev writev readv 30 x 1block ...passed 00:15:00.654 Test: blockdev writev readv block ...passed 00:15:00.654 Test: blockdev writev readv size > 128k ...passed 00:15:00.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:00.654 Test: blockdev comparev and writev ...passed 00:15:00.654 Test: blockdev nvme passthru rw ...passed 00:15:00.654 Test: blockdev nvme passthru vendor specific ...passed 00:15:00.654 Test: blockdev nvme admin passthru ...passed 00:15:00.654 Test: blockdev copy ...passed 00:15:00.654 Suite: bdevio tests on: Nvme0n1p1 00:15:00.654 Test: blockdev write read block ...passed 00:15:00.654 Test: blockdev write zeroes read block ...passed 00:15:00.654 Test: blockdev write zeroes read no split ...passed 00:15:00.654 Test: blockdev write zeroes read split ...passed 00:15:00.654 Test: blockdev write zeroes read split partial ...passed 00:15:00.654 Test: blockdev reset ...[2024-12-05 13:46:31.756359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:d8:00.0, 0] resetting controller 00:15:00.654 [2024-12-05 13:46:31.758665] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:d8:00.0, 0] Resetting controller successful. 
00:15:00.654 passed 00:15:00.654 Test: blockdev write read 8 blocks ...passed 00:15:00.654 Test: blockdev write read size > 128k ...passed 00:15:00.654 Test: blockdev write read invalid size ...passed 00:15:00.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:00.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:00.654 Test: blockdev write read max offset ...passed 00:15:00.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:00.654 Test: blockdev writev readv 8 blocks ...passed 00:15:00.654 Test: blockdev writev readv 30 x 1block ...passed 00:15:00.654 Test: blockdev writev readv block ...passed 00:15:00.654 Test: blockdev writev readv size > 128k ...passed 00:15:00.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:00.654 Test: blockdev comparev and writev ...passed 00:15:00.654 Test: blockdev nvme passthru rw ...passed 00:15:00.654 Test: blockdev nvme passthru vendor specific ...passed 00:15:00.654 Test: blockdev nvme admin passthru ...passed 00:15:00.654 Test: blockdev copy ...passed 00:15:00.654 00:15:00.654 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.654 suites 2 2 n/a 0 0 00:15:00.654 tests 46 46 46 0 0 00:15:00.654 asserts 260 260 260 0 n/a 00:15:00.654 00:15:00.654 Elapsed time = 0.303 seconds 00:15:00.654 0 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3872491 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 3872491 ']' 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 3872491 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3872491 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3872491' 00:15:00.654 killing process with pid 3872491 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 3872491 00:15:00.654 13:46:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 3872491 00:15:04.847 13:46:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:04.847 00:15:04.848 real 0m7.753s 00:15:04.848 user 0m21.796s 00:15:04.848 sys 0m0.824s 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:04.848 ************************************ 00:15:04.848 END TEST bdev_bounds 00:15:04.848 ************************************ 00:15:04.848 13:46:35 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:15:04.848 13:46:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:04.848 13:46:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:04.848 13:46:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:04.848 ************************************ 00:15:04.848 START TEST bdev_nbd 00:15:04.848 ************************************ 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3873570 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3873570 /var/tmp/spdk-nbd.sock 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 3873570 ']' 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:04.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:04.848 13:46:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:15:04.848 [2024-12-05 13:46:35.998549] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:04.848 [2024-12-05 13:46:35.998616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.848 [2024-12-05 13:46:36.110710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.848 [2024-12-05 13:46:36.167876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.106 [2024-12-05 13:46:36.418329] 'OCF_Core' volume operations registered 00:15:05.106 [2024-12-05 13:46:36.418363] 'OCF_Cache' volume operations registered 00:15:05.106 [2024-12-05 13:46:36.422780] 'OCF Composite' volume operations registered 00:15:05.106 [2024-12-05 13:46:36.427231] 'SPDK_block_device' volume operations registered 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:08.405 13:46:39 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.405 1+0 records in 00:15:08.405 1+0 records out 00:15:08.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267203 s, 15.3 MB/s 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:15:08.405 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.664 1+0 records in 00:15:08.664 1+0 records out 00:15:08.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320972 s, 12.8 MB/s 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:15:08.664 13:46:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:08.923 { 00:15:08.923 "nbd_device": "/dev/nbd0", 00:15:08.923 "bdev_name": "Nvme0n1p1" 00:15:08.923 }, 00:15:08.923 { 00:15:08.923 "nbd_device": "/dev/nbd1", 00:15:08.923 "bdev_name": "Nvme0n1p2" 00:15:08.923 } 00:15:08.923 ]' 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:08.923 { 00:15:08.923 "nbd_device": "/dev/nbd0", 00:15:08.923 "bdev_name": "Nvme0n1p1" 00:15:08.923 }, 00:15:08.923 { 00:15:08.923 "nbd_device": "/dev/nbd1", 00:15:08.923 "bdev_name": "Nvme0n1p2" 00:15:08.923 } 00:15:08.923 ]' 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.923 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.182 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.182 13:46:40 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.441 13:46:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 
00:15:09.700 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.701 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:09.701 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.701 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:09.701 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.701 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:09.701 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:15:09.960 /dev/nbd0 00:15:09.960 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.960 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.219 1+0 records in 00:15:10.219 1+0 records out 00:15:10.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276355 s, 14.8 MB/s 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:10.219 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:15:10.479 /dev/nbd1 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.479 1+0 records in 00:15:10.479 1+0 records out 00:15:10.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283713 s, 14.4 MB/s 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.479 13:46:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:10.738 { 00:15:10.738 "nbd_device": "/dev/nbd0", 00:15:10.738 "bdev_name": "Nvme0n1p1" 00:15:10.738 }, 00:15:10.738 { 00:15:10.738 "nbd_device": "/dev/nbd1", 00:15:10.738 "bdev_name": "Nvme0n1p2" 00:15:10.738 } 00:15:10.738 ]' 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:10.738 { 00:15:10.738 "nbd_device": "/dev/nbd0", 00:15:10.738 "bdev_name": "Nvme0n1p1" 00:15:10.738 }, 00:15:10.738 { 00:15:10.738 "nbd_device": "/dev/nbd1", 00:15:10.738 "bdev_name": "Nvme0n1p2" 00:15:10.738 } 00:15:10.738 ]' 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:10.738 /dev/nbd1' 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:10.738 /dev/nbd1' 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:15:10.738 13:46:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:10.738 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:10.739 256+0 records in 00:15:10.739 256+0 records out 00:15:10.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117609 s, 89.2 MB/s 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:10.739 256+0 records in 00:15:10.739 256+0 records out 00:15:10.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0345782 s, 30.3 MB/s 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:10.739 256+0 records in 00:15:10.739 256+0 records out 00:15:10.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0441846 s, 23.7 MB/s 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:10.739 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:15:11.001 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 
-- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:11.001 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:11.001 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:11.001 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.001 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:11.001 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.001 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.260 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:11.520 13:46:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:11.779 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:12.037 malloc_lvol_verify 00:15:12.037 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:12.297 b02712d2-6ff5-4364-a974-e038a268728a 00:15:12.297 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:12.556 4bd56511-fa21-4c62-8ace-843c7f81612a 00:15:12.556 13:46:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:12.814 /dev/nbd0 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:12.814 mke2fs 1.47.0 (5-Feb-2023) 00:15:12.814 Discarding device blocks: 0/4096 done 00:15:12.814 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:12.814 00:15:12.814 Allocating group tables: 0/1 done 00:15:12.814 Writing inode tables: 0/1 done 00:15:12.814 Creating journal (1024 blocks): done 00:15:12.814 Writing superblocks and filesystem accounting information: 0/1 done 00:15:12.814 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.814 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3873570 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 3873570 ']' 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 3873570 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.073 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3873570 00:15:13.332 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.332 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.332 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3873570' 00:15:13.332 killing process with pid 3873570 00:15:13.332 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 3873570 00:15:13.332 13:46:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 3873570 00:15:17.520 13:46:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:17.520 00:15:17.520 real 0m12.706s 00:15:17.520 user 0m14.847s 00:15:17.521 sys 0m2.888s 00:15:17.521 13:46:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.521 13:46:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:17.521 ************************************ 00:15:17.521 END TEST bdev_nbd 00:15:17.521 ************************************ 00:15:17.521 13:46:48 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:17.521 13:46:48 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:15:17.521 13:46:48 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:15:17.521 13:46:48 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:15:17.521 skipping fio tests on NVMe due to multi-ns failures. 
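Boiled down, the NBD round-trip that just finished follows a simple pattern: export each GPT bdev as a kernel block device over the SPDK NBD RPC socket, push a known buffer through it with dd, and cmp the data back. A condensed sketch (relative paths, and /tmp/nbdrandtest standing in for the test's scratch file; the RPC names and dd/cmp parameters are the ones visible in the trace):

  spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB of random data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                           # verify what the bdev stored
  spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The same flow is repeated for Nvme0n1p2 on /dev/nbd1, and the final lvol pass (malloc_lvol_verify -> lvstore -> lvol -> /dev/nbd0) goes one step further and runs mkfs.ext4 on the exported device before tearing it down.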
00:15:17.521 13:46:48 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:17.521 13:46:48 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:17.521 13:46:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:17.521 13:46:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.521 13:46:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:17.521 ************************************ 00:15:17.521 START TEST bdev_verify 00:15:17.521 ************************************ 00:15:17.521 13:46:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:17.521 [2024-12-05 13:46:48.779109] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:17.521 [2024-12-05 13:46:48.779169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875292 ] 00:15:17.521 [2024-12-05 13:46:48.890977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:17.521 [2024-12-05 13:46:48.948578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.521 [2024-12-05 13:46:48.948572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.778 [2024-12-05 13:46:49.207438] 'OCF_Core' volume operations registered 00:15:17.778 [2024-12-05 13:46:49.207475] 'OCF_Cache' volume operations registered 00:15:17.778 [2024-12-05 13:46:49.211907] 'OCF Composite' volume operations registered 00:15:17.778 [2024-12-05 13:46:49.216372] 'SPDK_block_device' volume operations registered 00:15:21.066 Running I/O for 5 seconds... 
00:15:22.750 22144.00 IOPS, 86.50 MiB/s [2024-12-05T12:46:55.650Z] 22016.00 IOPS, 86.00 MiB/s [2024-12-05T12:46:56.580Z] 22144.00 IOPS, 86.50 MiB/s [2024-12-05T12:46:57.538Z] 22112.00 IOPS, 86.38 MiB/s 00:15:26.012 Latency(us) 00:15:26.012 [2024-12-05T12:46:57.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.012 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.012 Verification LBA range: start 0x0 length 0xe8e0580 00:15:26.012 Nvme0n1p1 : 5.02 4512.80 17.63 0.00 0.00 28272.57 4957.94 23820.91 00:15:26.012 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.012 Verification LBA range: start 0xe8e0580 length 0xe8e0580 00:15:26.012 Nvme0n1p1 : 5.01 6514.13 25.45 0.00 0.00 19594.84 3034.60 16526.47 00:15:26.012 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.012 Verification LBA range: start 0x0 length 0xe8e057f 00:15:26.012 Nvme0n1p2 : 5.03 4507.22 17.61 0.00 0.00 28257.09 933.18 23934.89 00:15:26.012 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.012 Verification LBA range: start 0xe8e057f length 0xe8e057f 00:15:26.012 Nvme0n1p2 : 5.02 6523.97 25.48 0.00 0.00 19536.08 726.59 17780.20 00:15:26.012 [2024-12-05T12:46:57.538Z] =================================================================================================================== 00:15:26.012 [2024-12-05T12:46:57.538Z] Total : 22058.13 86.16 0.00 0.00 23125.63 726.59 23934.89 00:15:30.190 00:15:30.190 real 0m12.519s 00:15:30.190 user 0m23.276s 00:15:30.190 sys 0m0.632s 00:15:30.190 13:47:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.190 13:47:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:30.190 ************************************ 00:15:30.190 END TEST bdev_verify 00:15:30.190 ************************************ 00:15:30.190 13:47:01 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:30.190 13:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:30.191 13:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.191 13:47:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:30.191 ************************************ 00:15:30.191 START TEST bdev_verify_big_io 00:15:30.191 ************************************ 00:15:30.191 13:47:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:30.191 [2024-12-05 13:47:01.376591] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:15:30.191 [2024-12-05 13:47:01.376659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877013 ] 00:15:30.191 [2024-12-05 13:47:01.495613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:30.191 [2024-12-05 13:47:01.550758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.191 [2024-12-05 13:47:01.550765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.448 [2024-12-05 13:47:01.793555] 'OCF_Core' volume operations registered 00:15:30.448 [2024-12-05 13:47:01.793593] 'OCF_Cache' volume operations registered 00:15:30.448 [2024-12-05 13:47:01.797976] 'OCF Composite' volume operations registered 00:15:30.448 [2024-12-05 13:47:01.802435] 'SPDK_block_device' volume operations registered 00:15:33.728 Running I/O for 5 seconds... 00:15:36.190 1792.00 IOPS, 112.00 MiB/s [2024-12-05T12:47:09.090Z] 1664.00 IOPS, 104.00 MiB/s [2024-12-05T12:47:10.025Z] 1744.33 IOPS, 109.02 MiB/s [2024-12-05T12:47:10.025Z] 1856.00 IOPS, 116.00 MiB/s 00:15:38.499 Latency(us) 00:15:38.499 [2024-12-05T12:47:10.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.499 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:38.499 Verification LBA range: start 0x0 length 0xe8e058 00:15:38.499 Nvme0n1p1 : 5.24 317.37 19.84 0.00 0.00 393171.51 5983.72 421254.01 00:15:38.499 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:38.499 Verification LBA range: start 0xe8e058 length 0xe8e058 00:15:38.499 Nvme0n1p1 : 5.20 442.89 27.68 0.00 0.00 283553.76 3989.15 311837.38 00:15:38.499 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:38.499 Verification LBA range: start 0x0 length 0xe8e057 00:15:38.499 Nvme0n1p2 : 5.25 317.14 19.82 0.00 0.00 377261.66 6240.17 408488.74 00:15:38.499 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:38.499 Verification LBA range: start 0xe8e057 length 0xe8e057 00:15:38.499 Nvme0n1p2 : 5.21 442.31 27.64 0.00 0.00 275098.54 3405.02 300895.72 00:15:38.499 [2024-12-05T12:47:10.025Z] =================================================================================================================== 00:15:38.499 [2024-12-05T12:47:10.025Z] Total : 1519.70 94.98 0.00 0.00 323731.81 3405.02 421254.01 00:15:42.679 00:15:42.679 real 0m12.663s 00:15:42.679 user 0m23.552s 00:15:42.679 sys 0m0.663s 00:15:42.679 13:47:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.679 13:47:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.679 ************************************ 00:15:42.679 END TEST bdev_verify_big_io 00:15:42.679 ************************************ 00:15:42.679 13:47:14 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:42.679 13:47:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:42.679 13:47:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.679 13:47:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
00:15:42.679 ************************************ 00:15:42.679 START TEST bdev_write_zeroes 00:15:42.679 ************************************ 00:15:42.679 13:47:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:42.679 [2024-12-05 13:47:14.130322] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:42.679 [2024-12-05 13:47:14.130382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878656 ] 00:15:42.936 [2024-12-05 13:47:14.240518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.936 [2024-12-05 13:47:14.295806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.194 [2024-12-05 13:47:14.542953] 'OCF_Core' volume operations registered 00:15:43.194 [2024-12-05 13:47:14.542989] 'OCF_Cache' volume operations registered 00:15:43.194 [2024-12-05 13:47:14.547058] 'OCF Composite' volume operations registered 00:15:43.194 [2024-12-05 13:47:14.551214] 'SPDK_block_device' volume operations registered 00:15:46.476 Running I/O for 1 seconds... 00:15:47.042 45312.00 IOPS, 177.00 MiB/s 00:15:47.042 Latency(us) 00:15:47.042 [2024-12-05T12:47:18.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.042 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:47.042 Nvme0n1p1 : 1.01 22583.19 88.22 0.00 0.00 5653.91 3490.50 8149.26 00:15:47.042 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:47.042 Nvme0n1p2 : 1.02 22540.09 88.05 0.00 0.00 5653.77 2963.37 10029.86 00:15:47.042 [2024-12-05T12:47:18.568Z] =================================================================================================================== 00:15:47.042 [2024-12-05T12:47:18.568Z] Total : 45123.28 176.26 0.00 0.00 5653.84 2963.37 10029.86 00:15:51.226 00:15:51.226 real 0m8.373s 00:15:51.226 user 0m6.988s 00:15:51.226 sys 0m0.606s 00:15:51.226 13:47:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.226 13:47:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:51.226 ************************************ 00:15:51.226 END TEST bdev_write_zeroes 00:15:51.226 ************************************ 00:15:51.226 13:47:22 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:51.226 13:47:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:51.226 13:47:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.226 13:47:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:51.226 ************************************ 00:15:51.226 START TEST bdev_json_nonenclosed 00:15:51.226 ************************************ 00:15:51.226 13:47:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:51.226 [2024-12-05 13:47:22.585249] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:51.226 [2024-12-05 13:47:22.585307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879735 ] 00:15:51.226 [2024-12-05 13:47:22.705918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.484 [2024-12-05 13:47:22.762598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.484 [2024-12-05 13:47:22.762678] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:51.484 [2024-12-05 13:47:22.762697] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:51.484 [2024-12-05 13:47:22.762710] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.484 00:15:51.484 real 0m0.296s 00:15:51.484 user 0m0.177s 00:15:51.484 sys 0m0.116s 00:15:51.484 13:47:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.484 13:47:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:51.484 ************************************ 00:15:51.484 END TEST bdev_json_nonenclosed 00:15:51.484 ************************************ 00:15:51.484 13:47:22 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:51.484 13:47:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:51.484 13:47:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.484 13:47:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:51.484 ************************************ 00:15:51.484 START TEST bdev_json_nonarray 00:15:51.484 ************************************ 00:15:51.484 13:47:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:51.484 [2024-12-05 13:47:22.948877] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:15:51.484 [2024-12-05 13:47:22.948936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879917 ] 00:15:51.742 [2024-12-05 13:47:23.068692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.742 [2024-12-05 13:47:23.123988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.742 [2024-12-05 13:47:23.124069] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
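Both JSON negative tests above follow the same pattern: hand bdevperf a deliberately malformed config and require a non-zero exit. A hypothetical recreation of the "'subsystems' should be an array" case; the config contents and temp-file handling are my own illustration, not the repository's nonarray.json:

    spdk_dir=/var/jenkins/workspace/nvme-phy-autotest/spdk
    bad_cfg=$(mktemp)

    # "subsystems" present but not an array -> json_config should reject it.
    printf '%s\n' '{ "subsystems": { "not": "an array" } }' > "$bad_cfg"

    if "$spdk_dir/build/examples/bdevperf" --json "$bad_cfg" \
            -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "expected bdevperf to reject the malformed config" >&2
        exit 1
    fi
    rm -f "$bad_cfg"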
00:15:51.743 [2024-12-05 13:47:23.124088] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:51.743 [2024-12-05 13:47:23.124101] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.743 00:15:51.743 real 0m0.293s 00:15:51.743 user 0m0.167s 00:15:51.743 sys 0m0.124s 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:51.743 ************************************ 00:15:51.743 END TEST bdev_json_nonarray 00:15:51.743 ************************************ 00:15:51.743 13:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:15:51.743 13:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:15:51.743 13:47:23 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:15:51.743 13:47:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:51.743 13:47:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.743 13:47:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:51.743 ************************************ 00:15:51.743 START TEST bdev_gpt_uuid 00:15:51.743 ************************************ 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3879939 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 3879939 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 3879939 ']' 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.743 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:52.001 [2024-12-05 13:47:23.292398] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:15:52.001 [2024-12-05 13:47:23.292448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879939 ] 00:15:52.001 [2024-12-05 13:47:23.400944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.001 [2024-12-05 13:47:23.459011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.260 [2024-12-05 13:47:23.677572] 'OCF_Core' volume operations registered 00:15:52.260 [2024-12-05 13:47:23.677608] 'OCF_Cache' volume operations registered 00:15:52.260 [2024-12-05 13:47:23.681660] 'OCF Composite' volume operations registered 00:15:52.260 [2024-12-05 13:47:23.685753] 'SPDK_block_device' volume operations registered 00:15:52.518 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.518 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:15:52.518 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:15:52.518 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.518 13:47:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:55.801 Some configs were skipped because the RPC state that can call them passed over. 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:15:55.801 { 00:15:55.801 "name": "Nvme0n1p1", 00:15:55.801 "aliases": [ 00:15:55.801 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:15:55.801 ], 00:15:55.801 "product_name": "GPT Disk", 00:15:55.801 "block_size": 512, 00:15:55.801 "num_blocks": 3907016704, 00:15:55.801 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:15:55.801 "assigned_rate_limits": { 00:15:55.801 "rw_ios_per_sec": 0, 00:15:55.801 "rw_mbytes_per_sec": 0, 00:15:55.801 "r_mbytes_per_sec": 0, 00:15:55.801 "w_mbytes_per_sec": 0 00:15:55.801 }, 00:15:55.801 "claimed": false, 00:15:55.801 "zoned": false, 00:15:55.801 "supported_io_types": { 00:15:55.801 "read": true, 00:15:55.801 "write": true, 00:15:55.801 "unmap": true, 00:15:55.801 "flush": true, 00:15:55.801 "reset": true, 00:15:55.801 "nvme_admin": false, 00:15:55.801 "nvme_io": false, 00:15:55.801 "nvme_io_md": false, 00:15:55.801 "write_zeroes": true, 00:15:55.801 "zcopy": false, 00:15:55.801 "get_zone_info": false, 
00:15:55.801 "zone_management": false, 00:15:55.801 "zone_append": false, 00:15:55.801 "compare": false, 00:15:55.801 "compare_and_write": false, 00:15:55.801 "abort": true, 00:15:55.801 "seek_hole": false, 00:15:55.801 "seek_data": false, 00:15:55.801 "copy": false, 00:15:55.801 "nvme_iov_md": false 00:15:55.801 }, 00:15:55.801 "driver_specific": { 00:15:55.801 "gpt": { 00:15:55.801 "base_bdev": "Nvme0n1", 00:15:55.801 "offset_blocks": 2048, 00:15:55.801 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:15:55.801 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:15:55.801 "partition_name": "SPDK_TEST_first" 00:15:55.801 } 00:15:55.801 } 00:15:55.801 } 00:15:55.801 ]' 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.801 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:15:55.801 { 00:15:55.801 "name": "Nvme0n1p2", 00:15:55.802 "aliases": [ 00:15:55.802 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:15:55.802 ], 00:15:55.802 "product_name": "GPT Disk", 00:15:55.802 "block_size": 512, 00:15:55.802 "num_blocks": 3907016703, 00:15:55.802 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:15:55.802 "assigned_rate_limits": { 00:15:55.802 "rw_ios_per_sec": 0, 00:15:55.802 "rw_mbytes_per_sec": 0, 00:15:55.802 "r_mbytes_per_sec": 0, 00:15:55.802 "w_mbytes_per_sec": 0 00:15:55.802 }, 00:15:55.802 "claimed": false, 00:15:55.802 "zoned": false, 00:15:55.802 "supported_io_types": { 00:15:55.802 "read": true, 00:15:55.802 "write": true, 00:15:55.802 "unmap": true, 00:15:55.802 "flush": true, 00:15:55.802 "reset": true, 00:15:55.802 "nvme_admin": false, 00:15:55.802 "nvme_io": false, 00:15:55.802 "nvme_io_md": false, 00:15:55.802 "write_zeroes": true, 00:15:55.802 "zcopy": false, 00:15:55.802 "get_zone_info": false, 00:15:55.802 "zone_management": false, 00:15:55.802 "zone_append": false, 00:15:55.802 "compare": false, 00:15:55.802 "compare_and_write": false, 00:15:55.802 "abort": true, 00:15:55.802 "seek_hole": false, 00:15:55.802 "seek_data": false, 00:15:55.802 "copy": false, 00:15:55.802 "nvme_iov_md": false 00:15:55.802 }, 00:15:55.802 "driver_specific": { 00:15:55.802 "gpt": { 00:15:55.802 "base_bdev": "Nvme0n1", 00:15:55.802 "offset_blocks": 3907018752, 00:15:55.802 "partition_type_guid": 
"7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:15:55.802 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:15:55.802 "partition_name": "SPDK_TEST_second" 00:15:55.802 } 00:15:55.802 } 00:15:55.802 } 00:15:55.802 ]' 00:15:55.802 13:47:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 3879939 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 3879939 ']' 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 3879939 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3879939 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3879939' 00:15:55.802 killing process with pid 3879939 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 3879939 00:15:55.802 13:47:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 3879939 00:15:59.982 00:15:59.982 real 0m8.115s 00:15:59.982 user 0m7.127s 00:15:59.982 sys 0m0.856s 00:15:59.982 13:47:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.982 13:47:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:59.982 ************************************ 00:15:59.982 END TEST bdev_gpt_uuid 00:15:59.982 ************************************ 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:15:59.982 13:47:31 blockdev_nvme_gpt -- 
bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:15:59.982 13:47:31 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:16:03.267 Waiting for block devices as requested 00:16:03.267 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:16:03.528 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:16:03.528 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:16:03.528 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:16:03.787 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:16:03.787 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:16:03.787 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:16:04.046 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:16:04.046 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:16:04.046 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:16:04.305 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:16:04.305 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:16:04.305 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:16:04.563 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:16:04.563 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:16:04.563 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:16:04.822 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:16:05.755 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:16:05.755 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:16:06.013 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:16:06.013 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:16:06.013 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:06.013 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:06.013 13:47:37 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:16:06.013 00:16:06.013 real 1m40.874s 00:16:06.013 user 2m12.225s 00:16:06.013 sys 0m18.238s 00:16:06.013 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.013 13:47:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:06.013 ************************************ 00:16:06.013 END TEST blockdev_nvme_gpt 00:16:06.013 ************************************ 00:16:06.013 13:47:37 -- spdk/autotest.sh@212 -- # run_test nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh 00:16:06.013 13:47:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:06.013 13:47:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.013 13:47:37 -- common/autotest_common.sh@10 -- # set +x 00:16:06.013 ************************************ 00:16:06.013 START TEST nvme 00:16:06.013 ************************************ 00:16:06.013 13:47:37 nvme -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh 00:16:06.013 * Looking for test storage... 
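The bdev_gpt_uuid checks a few entries back query the running target for each GPT partition bdev by GUID and compare the reported alias and unique_partition_guid against the expected value. A condensed sketch of that check for the first partition, using the GUID from the log and assuming rpc.py talks to the target's default RPC socket:

    spdk_dir=/var/jenkins/workspace/nvme-phy-autotest/spdk
    guid=6f89f330-603b-4116-ac73-2ca8eae53030

    # Fetch the bdev record for the partition as JSON text.
    bdev_json=$("$spdk_dir/scripts/rpc.py" bdev_get_bdevs -b "$guid")

    # Exactly one bdev should match, and both the alias and the GPT
    # unique_partition_guid should round-trip to the same GUID.
    [[ $(jq -r 'length' <<< "$bdev_json") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$guid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$guid" ]]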
00:16:06.013 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:16:06.013 13:47:37 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:06.013 13:47:37 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:06.013 13:47:37 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:06.272 13:47:37 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.272 13:47:37 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.272 13:47:37 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.272 13:47:37 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.272 13:47:37 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.272 13:47:37 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.272 13:47:37 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.272 13:47:37 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.272 13:47:37 nvme -- scripts/common.sh@344 -- # case "$op" in 00:16:06.272 13:47:37 nvme -- scripts/common.sh@345 -- # : 1 00:16:06.272 13:47:37 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.272 13:47:37 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.272 13:47:37 nvme -- scripts/common.sh@365 -- # decimal 1 00:16:06.272 13:47:37 nvme -- scripts/common.sh@353 -- # local d=1 00:16:06.272 13:47:37 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.272 13:47:37 nvme -- scripts/common.sh@355 -- # echo 1 00:16:06.272 13:47:37 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.272 13:47:37 nvme -- scripts/common.sh@366 -- # decimal 2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@353 -- # local d=2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.272 13:47:37 nvme -- scripts/common.sh@355 -- # echo 2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.272 13:47:37 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.272 13:47:37 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.272 13:47:37 nvme -- scripts/common.sh@368 -- # return 0 00:16:06.272 13:47:37 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.272 13:47:37 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.272 --rc genhtml_branch_coverage=1 00:16:06.272 --rc genhtml_function_coverage=1 00:16:06.272 --rc genhtml_legend=1 00:16:06.272 --rc geninfo_all_blocks=1 00:16:06.272 --rc geninfo_unexecuted_blocks=1 00:16:06.272 00:16:06.272 ' 00:16:06.272 13:47:37 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.272 --rc genhtml_branch_coverage=1 00:16:06.272 --rc genhtml_function_coverage=1 00:16:06.272 --rc genhtml_legend=1 00:16:06.272 --rc geninfo_all_blocks=1 00:16:06.272 --rc geninfo_unexecuted_blocks=1 00:16:06.272 00:16:06.272 ' 00:16:06.272 13:47:37 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.272 --rc genhtml_branch_coverage=1 00:16:06.272 --rc 
genhtml_function_coverage=1 00:16:06.272 --rc genhtml_legend=1 00:16:06.272 --rc geninfo_all_blocks=1 00:16:06.272 --rc geninfo_unexecuted_blocks=1 00:16:06.272 00:16:06.272 ' 00:16:06.272 13:47:37 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:06.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.272 --rc genhtml_branch_coverage=1 00:16:06.272 --rc genhtml_function_coverage=1 00:16:06.272 --rc genhtml_legend=1 00:16:06.272 --rc geninfo_all_blocks=1 00:16:06.272 --rc geninfo_unexecuted_blocks=1 00:16:06.272 00:16:06.272 ' 00:16:06.272 13:47:37 nvme -- nvme/nvme.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:16:09.566 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:16:09.566 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:16:09.825 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:16:09.825 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:16:09.825 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:16:09.825 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:16:09.825 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:16:09.825 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:16:09.825 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:16:13.112 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:16:14.043 13:47:45 nvme -- nvme/nvme.sh@79 -- # uname 00:16:14.043 13:47:45 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:16:14.043 13:47:45 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:16:14.043 13:47:45 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1075 -- # stubpid=3884146 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:16:14.043 Waiting for stub to ready for secondary processes... 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/3884146 ]] 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:14.043 13:47:45 nvme -- common/autotest_common.sh@1074 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:16:14.043 [2024-12-05 13:47:45.486650] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:16:14.043 [2024-12-05 13:47:45.486712] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:16:14.975 13:47:46 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:14.975 13:47:46 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/3884146 ]] 00:16:14.975 13:47:46 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:15.562 [2024-12-05 13:47:46.970625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.562 [2024-12-05 13:47:47.015855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.562 [2024-12-05 13:47:47.015940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.562 [2024-12-05 13:47:47.015943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.126 13:47:47 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:16.126 13:47:47 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/3884146 ]] 00:16:16.126 13:47:47 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:17.053 13:47:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:17.053 13:47:48 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/3884146 ]] 00:16:17.053 13:47:48 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:17.987 13:47:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:17.987 13:47:49 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/3884146 ]] 00:16:17.987 13:47:49 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:18.553 [2024-12-05 13:47:50.024045] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:16:18.553 [2024-12-05 13:47:50.024089] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:16:18.553 [2024-12-05 13:47:50.040976] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:16:18.553 [2024-12-05 13:47:50.041077] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:16:19.185 13:47:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:19.185 13:47:50 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:16:19.185 done. 
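The "Waiting for stub to ready for secondary processes..." exchange above is a simple poll loop around the stub app. Roughly, assuming the same paths, memory size and core mask as in the log:

    spdk_dir=/var/jenkins/workspace/nvme-phy-autotest/spdk

    # Start the stub app (4096 MB hugepage memory, shm id 0, cores 1-3).
    "$spdk_dir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!

    # Poll once a second until the stub has created /var/run/spdk_stub0,
    # bailing out if the process dies before becoming ready.
    while [[ ! -e /var/run/spdk_stub0 ]]; do
        if [[ ! -e /proc/$stubpid ]]; then
            echo "stub exited before creating /var/run/spdk_stub0" >&2
            exit 1
        fi
        sleep 1s
    done
    echo done.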
00:16:19.185 13:47:50 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:19.185 13:47:50 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:16:19.185 13:47:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.185 13:47:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.185 ************************************ 00:16:19.185 START TEST nvme_reset 00:16:19.185 ************************************ 00:16:19.185 13:47:50 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:19.481 [2024-12-05 13:47:50.858776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858894] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.858996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859192] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting 
outstanding command 00:16:19.481 [2024-12-05 13:47:50.859209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding 
command 00:16:19.481 [2024-12-05 13:47:50.859660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859847] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.481 [2024-12-05 13:47:50.859931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.779 [2024-12-05 13:47:55.872776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 
00:16:24.779 [2024-12-05 13:47:55.872792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 
[... identical nvme_pcie_qpair_abort_trackers errors from 13:47:55.872809 through 13:47:55.873693 omitted ...] 
00:16:30.048 [2024-12-05 13:48:00.886183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 
[... identical nvme_pcie_qpair_abort_trackers errors from 13:48:00.886250 through 13:48:00.887307 omitted ...] 
00:16:35.313 Initializing NVMe Controllers 
00:16:35.313 Associating INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) with lcore 0 
00:16:35.313 Initialization complete. Launching workers. 
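The two abort_trackers bursts above mark the controller resets exercised by nvme_reset; a rough way to tally them from a saved copy of this console output (an illustrative shell one-liner, assuming one log entry per line and a hypothetical file name console.log, not something the test scripts produce):

  grep 'nvme_pcie_qpair_abort_trackers' console.log \
    | sed -E 's/.*\[([0-9-]+ [0-9:]+)\.[0-9]+\].*/\1/' \
    | sort | uniq -c
  # prints one count per second-granularity timestamp, i.e. roughly the
  # number of trackers aborted in each reset cycle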
00:16:35.313 Starting thread on core 0 00:16:35.313 ======================================================== 00:16:35.313 621760 IO completed successfully 00:16:35.313 64 IO completed with error 00:16:35.313 -------------------------------------------------------- 00:16:35.313 621824 IO completed total 00:16:35.313 621824 IO submitted 00:16:35.313 Starting thread on core 0 00:16:35.313 ======================================================== 00:16:35.313 622016 IO completed successfully 00:16:35.313 64 IO completed with error 00:16:35.313 -------------------------------------------------------- 00:16:35.313 622080 IO completed total 00:16:35.313 622080 IO submitted 00:16:35.313 Starting thread on core 0 00:16:35.313 ======================================================== 00:16:35.313 622656 IO completed successfully 00:16:35.313 64 IO completed with error 00:16:35.313 -------------------------------------------------------- 00:16:35.313 622720 IO completed total 00:16:35.313 622720 IO submitted 00:16:35.313 00:16:35.313 real 0m15.414s 00:16:35.313 user 0m15.085s 00:16:35.313 sys 0m0.199s 00:16:35.313 13:48:05 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.313 13:48:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:16:35.313 ************************************ 00:16:35.313 END TEST nvme_reset 00:16:35.313 ************************************ 00:16:35.313 13:48:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:16:35.313 13:48:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.313 13:48:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.313 13:48:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:35.313 ************************************ 00:16:35.313 START TEST nvme_identify 00:16:35.313 ************************************ 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:16:35.313 13:48:06 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:16:35.313 13:48:06 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:16:35.313 13:48:06 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:16:35.313 13:48:06 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:16:35.313 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:16:35.314 13:48:06 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -i 0 00:16:35.314 ===================================================== 00:16:35.314 NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:16:35.314 ===================================================== 00:16:35.314 Controller Capabilities/Features 00:16:35.314 ================================ 00:16:35.314 Vendor ID: 8086 00:16:35.314 
Subsystem Vendor ID: 8086 00:16:35.314 Serial Number: BTLJ8234018V4P0DGN 00:16:35.314 Model Number: INTEL SSDPE2KX040T8 00:16:35.314 Firmware Version: VDV1Y295 00:16:35.314 Recommended Arb Burst: 0 00:16:35.314 IEEE OUI Identifier: e4 d2 5c 00:16:35.314 Multi-path I/O 00:16:35.314 May have multiple subsystem ports: No 00:16:35.314 May have multiple controllers: No 00:16:35.314 Associated with SR-IOV VF: No 00:16:35.314 Max Data Transfer Size: 131072 00:16:35.314 Max Number of Namespaces: 128 00:16:35.314 Max Number of I/O Queues: 128 00:16:35.314 NVMe Specification Version (VS): 1.2 00:16:35.314 NVMe Specification Version (Identify): 1.2 00:16:35.314 Maximum Queue Entries: 4096 00:16:35.314 Contiguous Queues Required: Yes 00:16:35.314 Arbitration Mechanisms Supported 00:16:35.314 Weighted Round Robin: Supported 00:16:35.314 Vendor Specific: Not Supported 00:16:35.314 Reset Timeout: 60000 ms 00:16:35.314 Doorbell Stride: 4 bytes 00:16:35.314 NVM Subsystem Reset: Not Supported 00:16:35.314 Command Sets Supported 00:16:35.314 NVM Command Set: Supported 00:16:35.314 Boot Partition: Not Supported 00:16:35.314 Memory Page Size Minimum: 4096 bytes 00:16:35.314 Memory Page Size Maximum: 4096 bytes 00:16:35.314 Persistent Memory Region: Not Supported 00:16:35.314 Optional Asynchronous Events Supported 00:16:35.314 Namespace Attribute Notices: Not Supported 00:16:35.314 Firmware Activation Notices: Supported 00:16:35.314 ANA Change Notices: Not Supported 00:16:35.314 PLE Aggregate Log Change Notices: Not Supported 00:16:35.314 LBA Status Info Alert Notices: Not Supported 00:16:35.314 EGE Aggregate Log Change Notices: Not Supported 00:16:35.314 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.314 Zone Descriptor Change Notices: Not Supported 00:16:35.314 Discovery Log Change Notices: Not Supported 00:16:35.314 Controller Attributes 00:16:35.314 128-bit Host Identifier: Not Supported 00:16:35.314 Non-Operational Permissive Mode: Not Supported 00:16:35.314 NVM Sets: Not Supported 00:16:35.314 Read Recovery Levels: Not Supported 00:16:35.314 Endurance Groups: Not Supported 00:16:35.314 Predictable Latency Mode: Not Supported 00:16:35.314 Traffic Based Keep ALive: Not Supported 00:16:35.314 Namespace Granularity: Not Supported 00:16:35.314 SQ Associations: Not Supported 00:16:35.314 UUID List: Not Supported 00:16:35.314 Multi-Domain Subsystem: Not Supported 00:16:35.314 Fixed Capacity Management: Not Supported 00:16:35.314 Variable Capacity Management: Not Supported 00:16:35.314 Delete Endurance Group: Not Supported 00:16:35.314 Delete NVM Set: Not Supported 00:16:35.314 Extended LBA Formats Supported: Not Supported 00:16:35.314 Flexible Data Placement Supported: Not Supported 00:16:35.314 00:16:35.314 Controller Memory Buffer Support 00:16:35.314 ================================ 00:16:35.314 Supported: No 00:16:35.314 00:16:35.314 Persistent Memory Region Support 00:16:35.314 ================================ 00:16:35.314 Supported: No 00:16:35.314 00:16:35.314 Admin Command Set Attributes 00:16:35.314 ============================ 00:16:35.314 Security Send/Receive: Not Supported 00:16:35.314 Format NVM: Supported 00:16:35.314 Firmware Activate/Download: Supported 00:16:35.314 Namespace Management: Supported 00:16:35.314 Device Self-Test: Not Supported 00:16:35.314 Directives: Not Supported 00:16:35.314 NVMe-MI: Not Supported 00:16:35.314 Virtualization Management: Not Supported 00:16:35.314 Doorbell Buffer Config: Not Supported 00:16:35.314 Get LBA Status Capability: Not Supported 
00:16:35.314 Command & Feature Lockdown Capability: Not Supported 00:16:35.314 Abort Command Limit: 4 00:16:35.314 Async Event Request Limit: 4 00:16:35.314 Number of Firmware Slots: 4 00:16:35.314 Firmware Slot 1 Read-Only: No 00:16:35.314 Firmware Activation Without Reset: Yes 00:16:35.314 Multiple Update Detection Support: No 00:16:35.314 Firmware Update Granularity: No Information Provided 00:16:35.314 Per-Namespace SMART Log: No 00:16:35.314 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.314 Subsystem NQN: 00:16:35.314 Command Effects Log Page: Supported 00:16:35.314 Get Log Page Extended Data: Supported 00:16:35.314 Telemetry Log Pages: Supported 00:16:35.314 Persistent Event Log Pages: Not Supported 00:16:35.314 Supported Log Pages Log Page: May Support 00:16:35.314 Commands Supported & Effects Log Page: Not Supported 00:16:35.314 Feature Identifiers & Effects Log Page:May Support 00:16:35.314 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.314 Data Area 4 for Telemetry Log: Not Supported 00:16:35.314 Error Log Page Entries Supported: 64 00:16:35.314 Keep Alive: Not Supported 00:16:35.314 00:16:35.314 NVM Command Set Attributes 00:16:35.314 ========================== 00:16:35.314 Submission Queue Entry Size 00:16:35.314 Max: 64 00:16:35.314 Min: 64 00:16:35.314 Completion Queue Entry Size 00:16:35.314 Max: 16 00:16:35.314 Min: 16 00:16:35.314 Number of Namespaces: 128 00:16:35.314 Compare Command: Not Supported 00:16:35.314 Write Uncorrectable Command: Supported 00:16:35.314 Dataset Management Command: Supported 00:16:35.314 Write Zeroes Command: Not Supported 00:16:35.314 Set Features Save Field: Not Supported 00:16:35.314 Reservations: Not Supported 00:16:35.314 Timestamp: Not Supported 00:16:35.314 Copy: Not Supported 00:16:35.314 Volatile Write Cache: Not Present 00:16:35.314 Atomic Write Unit (Normal): 1 00:16:35.314 Atomic Write Unit (PFail): 1 00:16:35.314 Atomic Compare & Write Unit: 1 00:16:35.314 Fused Compare & Write: Not Supported 00:16:35.314 Scatter-Gather List 00:16:35.314 SGL Command Set: Not Supported 00:16:35.314 SGL Keyed: Not Supported 00:16:35.314 SGL Bit Bucket Descriptor: Not Supported 00:16:35.314 SGL Metadata Pointer: Not Supported 00:16:35.315 Oversized SGL: Not Supported 00:16:35.315 SGL Metadata Address: Not Supported 00:16:35.315 SGL Offset: Not Supported 00:16:35.315 Transport SGL Data Block: Not Supported 00:16:35.315 Replay Protected Memory Block: Not Supported 00:16:35.315 00:16:35.315 Firmware Slot Information 00:16:35.315 ========================= 00:16:35.315 Active slot: 1 00:16:35.315 Slot 1 Firmware Revision: VDV1Y295 00:16:35.315 00:16:35.315 00:16:35.315 Commands Supported and Effects 00:16:35.315 ============================== 00:16:35.315 Admin Commands 00:16:35.315 -------------- 00:16:35.315 Delete I/O Submission Queue (00h): Supported 00:16:35.315 Create I/O Submission Queue (01h): Supported All-NS-Exclusive 00:16:35.315 Get Log Page (02h): Supported 00:16:35.315 Delete I/O Completion Queue (04h): Supported 00:16:35.315 Create I/O Completion Queue (05h): Supported All-NS-Exclusive 00:16:35.315 Identify (06h): Supported 00:16:35.315 Abort (08h): Supported 00:16:35.315 Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 00:16:35.315 Get Features (0Ah): Supported 00:16:35.315 Asynchronous Event Request (0Ch): Supported 00:16:35.315 Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive 00:16:35.315 Firmware Commit (10h): Supported Ctrlr-Cap-Change 00:16:35.315 
Firmware Image Download (11h): Supported 00:16:35.315 Namespace Attachment (15h): Supported Per-NS-Exclusive 00:16:35.315 Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive 00:16:35.315 Vendor specific (C8h): Supported 00:16:35.315 Vendor specific (D2h): Supported 00:16:35.315 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:16:35.315 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:16:35.315 I/O Commands 00:16:35.315 ------------ 00:16:35.315 Flush (00h): Supported LBA-Change 00:16:35.315 Write (01h): Supported LBA-Change 00:16:35.315 Read (02h): Supported 00:16:35.315 Write Uncorrectable (04h): Supported LBA-Change 00:16:35.315 Dataset Management (09h): Supported LBA-Change 00:16:35.315 00:16:35.315 Error Log 00:16:35.315 ========= 00:16:35.315 Entry: 0 00:16:35.315 Error Count: 0x1f02 00:16:35.315 Submission Queue Id: 0x2 00:16:35.315 Command Id: 0xffff 00:16:35.315 Phase Bit: 0 00:16:35.315 Status Code: 0x6 00:16:35.315 Status Code Type: 0x0 00:16:35.315 Do Not Retry: 1 00:16:35.315 Error Location: 0xffff 00:16:35.315 LBA: 0x0 00:16:35.315 Namespace: 0xffffffff 00:16:35.315 Vendor Log Page: 0x0 00:16:35.315 ----------- 00:16:35.315 Entry: 1 00:16:35.315 Error Count: 0x1f01 00:16:35.315 Submission Queue Id: 0x2 00:16:35.315 Command Id: 0xffff 00:16:35.315 Phase Bit: 0 00:16:35.315 Status Code: 0x6 00:16:35.315 Status Code Type: 0x0 00:16:35.315 Do Not Retry: 1 00:16:35.315 Error Location: 0xffff 00:16:35.315 LBA: 0x0 00:16:35.315 Namespace: 0xffffffff 00:16:35.315 Vendor Log Page: 0x0 00:16:35.315 ----------- 00:16:35.315 Entry: 2 00:16:35.315 Error Count: 0x1f00 00:16:35.315 Submission Queue Id: 0x0 00:16:35.315 Command Id: 0xffff 00:16:35.315 Phase Bit: 0 00:16:35.315 Status Code: 0x6 00:16:35.315 Status Code Type: 0x0 00:16:35.315 Do Not Retry: 1 00:16:35.315 Error Location: 0xffff 00:16:35.315 LBA: 0x0 00:16:35.315 Namespace: 0xffffffff 00:16:35.315 Vendor Log Page: 0x0 00:16:35.315 ----------- 00:16:35.315 Entry: 3 00:16:35.315 Error Count: 0x1eff 00:16:35.315 Submission Queue Id: 0x2 00:16:35.315 Command Id: 0xffff 00:16:35.315 Phase Bit: 0 00:16:35.315 Status Code: 0x6 00:16:35.315 Status Code Type: 0x0 00:16:35.315 Do Not Retry: 1 00:16:35.315 Error Location: 0xffff 00:16:35.315 LBA: 0x0 00:16:35.315 Namespace: 0xffffffff 00:16:35.315 Vendor Log Page: 0x0 00:16:35.315 ----------- 00:16:35.315 Entry: 4 00:16:35.315 Error Count: 0x1efe 00:16:35.315 Submission Queue Id: 0x2 00:16:35.315 Command Id: 0xffff 00:16:35.315 Phase Bit: 0 00:16:35.315 Status Code: 0x6 00:16:35.315 Status Code Type: 0x0 00:16:35.315 Do Not Retry: 1 00:16:35.315 Error Location: 0xffff 00:16:35.315 LBA: 0x0 00:16:35.315 Namespace: 0xffffffff 00:16:35.315 Vendor Log Page: 0x0 00:16:35.315 ----------- 00:16:35.315 Entry: 5 00:16:35.315 Error Count: 0x1efd 00:16:35.315 Submission Queue Id: 0x0 00:16:35.315 Command Id: 0xffff 00:16:35.315 Phase Bit: 0 00:16:35.315 Status Code: 0x6 00:16:35.315 Status Code Type: 0x0 00:16:35.315 Do Not Retry: 1 00:16:35.315 Error Location: 0xffff 00:16:35.315 LBA: 0x0 00:16:35.315 Namespace: 0xffffffff 00:16:35.315 Vendor Log Page: 0x0 00:16:35.315 ----------- 00:16:35.315 Entry: 6 00:16:35.315 Error Count: 0x1efc 00:16:35.315 Submission Queue Id: 0x2 00:16:35.315 Command Id: 0xffff 00:16:35.315 Phase Bit: 0 00:16:35.315 Status Code: 0x6 00:16:35.315 Status Code Type: 0x0 00:16:35.315 Do Not Retry: 1 00:16:35.315 Error Location: 
0xffff 00:16:35.315 LBA: 0x0 00:16:35.315 Namespace: 0xffffffff 00:16:35.315 Vendor Log Page: 0x0 00:16:35.315 ----------- 00:16:35.315 Entry: 7 00:16:35.315 Error Count: 0x1efb 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 8 00:16:35.316 Error Count: 0x1efa 00:16:35.316 Submission Queue Id: 0x0 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 9 00:16:35.316 Error Count: 0x1ef9 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 10 00:16:35.316 Error Count: 0x1ef8 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 11 00:16:35.316 Error Count: 0x1ef7 00:16:35.316 Submission Queue Id: 0x0 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 12 00:16:35.316 Error Count: 0x1ef6 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 13 00:16:35.316 Error Count: 0x1ef5 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 14 00:16:35.316 Error Count: 0x1ef4 00:16:35.316 Submission Queue Id: 0x0 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 15 00:16:35.316 Error Count: 0x1ef3 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 
00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 16 00:16:35.316 Error Count: 0x1ef2 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 17 00:16:35.316 Error Count: 0x1ef1 00:16:35.316 Submission Queue Id: 0x0 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 18 00:16:35.316 Error Count: 0x1ef0 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 19 00:16:35.316 Error Count: 0x1eef 00:16:35.316 Submission Queue Id: 0x2 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.316 Status Code: 0x6 00:16:35.316 Status Code Type: 0x0 00:16:35.316 Do Not Retry: 1 00:16:35.316 Error Location: 0xffff 00:16:35.316 LBA: 0x0 00:16:35.316 Namespace: 0xffffffff 00:16:35.316 Vendor Log Page: 0x0 00:16:35.316 ----------- 00:16:35.316 Entry: 20 00:16:35.316 Error Count: 0x1eee 00:16:35.316 Submission Queue Id: 0x0 00:16:35.316 Command Id: 0xffff 00:16:35.316 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 21 00:16:35.317 Error Count: 0x1eed 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 22 00:16:35.317 Error Count: 0x1eec 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 23 00:16:35.317 Error Count: 0x1eeb 00:16:35.317 Submission Queue Id: 0x0 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 24 00:16:35.317 Error Count: 0x1eea 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 
00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 25 00:16:35.317 Error Count: 0x1ee9 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 26 00:16:35.317 Error Count: 0x1ee8 00:16:35.317 Submission Queue Id: 0x0 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 27 00:16:35.317 Error Count: 0x1ee7 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 28 00:16:35.317 Error Count: 0x1ee6 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 29 00:16:35.317 Error Count: 0x1ee5 00:16:35.317 Submission Queue Id: 0x0 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 30 00:16:35.317 Error Count: 0x1ee4 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 31 00:16:35.317 Error Count: 0x1ee3 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 32 00:16:35.317 Error Count: 0x1ee2 00:16:35.317 Submission Queue Id: 0x0 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.317 Status Code: 0x6 00:16:35.317 Status Code Type: 0x0 00:16:35.317 Do Not Retry: 1 00:16:35.317 Error Location: 0xffff 00:16:35.317 LBA: 0x0 00:16:35.317 Namespace: 0xffffffff 00:16:35.317 Vendor Log Page: 0x0 00:16:35.317 ----------- 00:16:35.317 Entry: 33 00:16:35.317 Error Count: 0x1ee1 00:16:35.317 Submission Queue Id: 0x2 00:16:35.317 Command Id: 0xffff 00:16:35.317 Phase Bit: 0 00:16:35.318 Status Code: 0x6 
00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 34 00:16:35.318 Error Count: 0x1ee0 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 35 00:16:35.318 Error Count: 0x1edf 00:16:35.318 Submission Queue Id: 0x0 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 36 00:16:35.318 Error Count: 0x1ede 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 37 00:16:35.318 Error Count: 0x1edd 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 38 00:16:35.318 Error Count: 0x1edc 00:16:35.318 Submission Queue Id: 0x0 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 39 00:16:35.318 Error Count: 0x1edb 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 40 00:16:35.318 Error Count: 0x1eda 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 41 00:16:35.318 Error Count: 0x1ed9 00:16:35.318 Submission Queue Id: 0x0 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 42 00:16:35.318 Error Count: 0x1ed8 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 
00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 43 00:16:35.318 Error Count: 0x1ed7 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 44 00:16:35.318 Error Count: 0x1ed6 00:16:35.318 Submission Queue Id: 0x0 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 45 00:16:35.318 Error Count: 0x1ed5 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.318 LBA: 0x0 00:16:35.318 Namespace: 0xffffffff 00:16:35.318 Vendor Log Page: 0x0 00:16:35.318 ----------- 00:16:35.318 Entry: 46 00:16:35.318 Error Count: 0x1ed4 00:16:35.318 Submission Queue Id: 0x2 00:16:35.318 Command Id: 0xffff 00:16:35.318 Phase Bit: 0 00:16:35.318 Status Code: 0x6 00:16:35.318 Status Code Type: 0x0 00:16:35.318 Do Not Retry: 1 00:16:35.318 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 47 00:16:35.319 Error Count: 0x1ed3 00:16:35.319 Submission Queue Id: 0x0 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 48 00:16:35.319 Error Count: 0x1ed2 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 49 00:16:35.319 Error Count: 0x1ed1 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 50 00:16:35.319 Error Count: 0x1ed0 00:16:35.319 Submission Queue Id: 0x0 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 51 00:16:35.319 Error Count: 0x1ecf 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 
0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 52 00:16:35.319 Error Count: 0x1ece 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 53 00:16:35.319 Error Count: 0x1ecd 00:16:35.319 Submission Queue Id: 0x0 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 54 00:16:35.319 Error Count: 0x1ecc 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 55 00:16:35.319 Error Count: 0x1ecb 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 56 00:16:35.319 Error Count: 0x1eca 00:16:35.319 Submission Queue Id: 0x0 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 57 00:16:35.319 Error Count: 0x1ec9 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 58 00:16:35.319 Error Count: 0x1ec8 00:16:35.319 Submission Queue Id: 0x2 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 59 00:16:35.319 Error Count: 0x1ec7 00:16:35.319 Submission Queue Id: 0x0 00:16:35.319 Command Id: 0xffff 00:16:35.319 Phase Bit: 0 00:16:35.319 Status Code: 0x6 00:16:35.319 Status Code Type: 0x0 00:16:35.319 Do Not Retry: 1 00:16:35.319 Error Location: 0xffff 00:16:35.319 LBA: 0x0 00:16:35.319 Namespace: 0xffffffff 00:16:35.319 Vendor Log Page: 0x0 00:16:35.319 ----------- 00:16:35.319 Entry: 60 00:16:35.319 Error Count: 0x1ec6 00:16:35.319 Submission Queue Id: 
0x2 00:16:35.319 Command Id: 0xffff 00:16:35.320 Phase Bit: 0 00:16:35.320 Status Code: 0x6 00:16:35.320 Status Code Type: 0x0 00:16:35.320 Do Not Retry: 1 00:16:35.320 Error Location: 0xffff 00:16:35.320 LBA: 0x0 00:16:35.320 Namespace: 0xffffffff 00:16:35.320 Vendor Log Page: 0x0 00:16:35.320 ----------- 00:16:35.320 Entry: 61 00:16:35.320 Error Count: 0x1ec5 00:16:35.320 Submission Queue Id: 0x2 00:16:35.320 Command Id: 0xffff 00:16:35.320 Phase Bit: 0 00:16:35.320 Status Code: 0x6 00:16:35.320 Status Code Type: 0x0 00:16:35.320 Do Not Retry: 1 00:16:35.320 Error Location: 0xffff 00:16:35.320 LBA: 0x0 00:16:35.320 Namespace: 0xffffffff 00:16:35.320 Vendor Log Page: 0x0 00:16:35.320 ----------- 00:16:35.320 Entry: 62 00:16:35.320 Error Count: 0x1ec4 00:16:35.320 Submission Queue Id: 0x0 00:16:35.320 Command Id: 0xffff 00:16:35.320 Phase Bit: 0 00:16:35.320 Status Code: 0x6 00:16:35.320 Status Code Type: 0x0 00:16:35.320 Do Not Retry: 1 00:16:35.320 Error Location: 0xffff 00:16:35.320 LBA: 0x0 00:16:35.320 Namespace: 0xffffffff 00:16:35.320 Vendor Log Page: 0x0 00:16:35.320 ----------- 00:16:35.320 Entry: 63 00:16:35.320 Error Count: 0x1ec3 00:16:35.320 Submission Queue Id: 0x2 00:16:35.320 Command Id: 0xffff 00:16:35.320 Phase Bit: 0 00:16:35.320 Status Code: 0x6 00:16:35.320 Status Code Type: 0x0 00:16:35.320 Do Not Retry: 1 00:16:35.320 Error Location: 0xffff 00:16:35.320 LBA: 0x0 00:16:35.320 Namespace: 0xffffffff 00:16:35.320 Vendor Log Page: 0x0 00:16:35.320 00:16:35.320 Arbitration 00:16:35.320 =========== 00:16:35.320 Arbitration Burst: 1 00:16:35.320 Low Priority Weight: 1 00:16:35.320 Medium Priority Weight: 1 00:16:35.320 High Priority Weight: 1 00:16:35.320 00:16:35.320 Power Management 00:16:35.320 ================ 00:16:35.320 Number of Power States: 1 00:16:35.320 Current Power State: Power State #0 00:16:35.320 Power State #0: 00:16:35.320 Max Power: 20.00 W 00:16:35.320 Non-Operational State: Operational 00:16:35.320 Entry Latency: Not Reported 00:16:35.320 Exit Latency: Not Reported 00:16:35.320 Relative Read Throughput: 0 00:16:35.320 Relative Read Latency: 0 00:16:35.320 Relative Write Throughput: 0 00:16:35.320 Relative Write Latency: 0 00:16:35.320 Idle Power: Not Reported 00:16:35.320 Active Power: Not Reported 00:16:35.320 Non-Operational Permissive Mode: Not Supported 00:16:35.320 00:16:35.320 Health Information 00:16:35.320 ================== 00:16:35.320 Critical Warnings: 00:16:35.320 Available Spare Space: OK 00:16:35.320 Temperature: OK 00:16:35.320 Device Reliability: OK 00:16:35.320 Read Only: No 00:16:35.320 Volatile Memory Backup: OK 00:16:35.320 Current Temperature: 308 Kelvin (35 Celsius) 00:16:35.320 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:35.320 Available Spare: 100% 00:16:35.320 Available Spare Threshold: 10% 00:16:35.320 Life Percentage Used: 6% 00:16:35.320 Data Units Read: 103439399 00:16:35.320 Data Units Written: 227621947 00:16:35.320 Host Read Commands: 6940983620 00:16:35.320 Host Write Commands: 8109420884 00:16:35.320 Controller Busy Time: 604 minutes 00:16:35.320 Power Cycles: 97 00:16:35.320 Power On Hours: 39056 hours 00:16:35.320 Unsafe Shutdowns: 77 00:16:35.320 Unrecoverable Media Errors: 0 00:16:35.320 Lifetime Error Log Entries: 7938 00:16:35.320 Warning Temperature Time: 474 minutes 00:16:35.320 Critical Temperature Time: 0 minutes 00:16:35.320 00:16:35.320 Number of Queues 00:16:35.320 ================ 00:16:35.320 Number of I/O Submission Queues: 128 00:16:35.320 Number of I/O Completion Queues: 128 00:16:35.320 
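A quick consistency check on the error log dumped above: the 64 entries carry descending Error Count values from 0x1f02 down to 0x1ec3, and the newest count matches the Lifetime Error Log Entries figure in the health data (illustrative shell arithmetic, not part of the test):

  echo $((0x1f02 - 0x1ec3 + 1))   # 64 entries spanned, matching "Error Log Page Entries Supported: 64"
  echo $((0x1f02))                # 7938, matching "Lifetime Error Log Entries: 7938"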
00:16:35.320 Intel Health Information 00:16:35.320 ================== 00:16:35.320 Program Fail Count: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 0 00:16:35.320 Erase Fail Count: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 0 00:16:35.320 Wear Leveling Count: 00:16:35.320 Normalized Value : 94 00:16:35.320 Current Raw Value: 00:16:35.320 Min: 91 00:16:35.320 Max: 321 00:16:35.320 Avg: 301 00:16:35.320 End to End Error Detection Count: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 0 00:16:35.320 CRC Error Count: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 0 00:16:35.320 Timed Workload, Media Wear: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 65535 00:16:35.320 Timed Workload, Host Read/Write Ratio: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 65535% 00:16:35.320 Timed Workload, Timer: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 65535 00:16:35.320 Thermal Throttle Status: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 00:16:35.320 Percentage: 0% 00:16:35.320 Throttling Event Count: 0 00:16:35.320 Retry Buffer Overflow Counter: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 0 00:16:35.320 PLL Lock Loss Count: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 0 00:16:35.320 NAND Bytes Written: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 21353997 00:16:35.320 Host Bytes Written: 00:16:35.320 Normalized Value : 100 00:16:35.320 Current Raw Value: 3473235 00:16:35.320 00:16:35.320 Intel Temperature Information 00:16:35.320 ================== 00:16:35.320 Current Temperature: 35 00:16:35.320 Overtemp shutdown Flag for last critical component temperature: 0 00:16:35.320 Overtemp shutdown Flag for life critical component temperature: 0 00:16:35.320 Highest temperature: 63 00:16:35.320 Lowest temperature: 18 00:16:35.320 Specified Maximum Operating Temperature: 70 00:16:35.320 Specified Minimum Operating Temperature: 0 00:16:35.320 Estimated offset: 0 00:16:35.320 00:16:35.321 00:16:35.321 Intel Marketing Information 00:16:35.321 ================== 00:16:35.321 Marketing Product Information: Intel(R) SSD DC P4510 Series 00:16:35.321 00:16:35.321 00:16:35.321 Active Namespaces 00:16:35.321 ================= 00:16:35.321 Namespace ID:1 00:16:35.321 Error Recovery Timeout: Unlimited 00:16:35.321 Command Set Identifier: NVM (00h) 00:16:35.321 Deallocate: Supported 00:16:35.321 Deallocated/Unwritten Error: Not Supported 00:16:35.321 Deallocated Read Value: All 0x00 00:16:35.321 Deallocate in Write Zeroes: Not Supported 00:16:35.321 Deallocated Guard Field: 0xFFFF 00:16:35.321 Flush: Not Supported 00:16:35.321 Reservation: Not Supported 00:16:35.321 Namespace Sharing Capabilities: Private 00:16:35.321 Size (in LBAs): 7814037168 (3726GiB) 00:16:35.321 Capacity (in LBAs): 7814037168 (3726GiB) 00:16:35.321 Utilization (in LBAs): 7814037168 (3726GiB) 00:16:35.321 NGUID: 01000000D91400000000000000000000 00:16:35.321 EUI64: 000000000000D914 00:16:35.321 Thin Provisioning: Not Supported 00:16:35.321 Per-NS Atomic Units: No 00:16:35.321 NGUID/EUI64 Never Reused: No 00:16:35.321 Namespace Write Protected: No 00:16:35.321 Number of LBA Formats: 2 00:16:35.321 Current LBA Format: LBA Format #00 00:16:35.321 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.321 LBA Format #01: Data Size: 4096 Metadata Size: 0 00:16:35.321 00:16:35.321 13:48:06 nvme.nvme_identify -- 
nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:16:35.321 13:48:06 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:16:35.321 ===================================================== 00:16:35.321 NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:16:35.321 ===================================================== 00:16:35.321 Controller Capabilities/Features 00:16:35.321 ================================ 00:16:35.321 Vendor ID: 8086 00:16:35.321 Subsystem Vendor ID: 8086 00:16:35.321 Serial Number: BTLJ8234018V4P0DGN 00:16:35.321 Model Number: INTEL SSDPE2KX040T8 00:16:35.321 Firmware Version: VDV1Y295 00:16:35.321 Recommended Arb Burst: 0 00:16:35.321 IEEE OUI Identifier: e4 d2 5c 00:16:35.321 Multi-path I/O 00:16:35.321 May have multiple subsystem ports: No 00:16:35.321 May have multiple controllers: No 00:16:35.321 Associated with SR-IOV VF: No 00:16:35.321 Max Data Transfer Size: 131072 00:16:35.321 Max Number of Namespaces: 128 00:16:35.321 Max Number of I/O Queues: 128 00:16:35.321 NVMe Specification Version (VS): 1.2 00:16:35.321 NVMe Specification Version (Identify): 1.2 00:16:35.321 Maximum Queue Entries: 4096 00:16:35.321 Contiguous Queues Required: Yes 00:16:35.321 Arbitration Mechanisms Supported 00:16:35.321 Weighted Round Robin: Supported 00:16:35.321 Vendor Specific: Not Supported 00:16:35.321 Reset Timeout: 60000 ms 00:16:35.321 Doorbell Stride: 4 bytes 00:16:35.321 NVM Subsystem Reset: Not Supported 00:16:35.321 Command Sets Supported 00:16:35.321 NVM Command Set: Supported 00:16:35.321 Boot Partition: Not Supported 00:16:35.321 Memory Page Size Minimum: 4096 bytes 00:16:35.321 Memory Page Size Maximum: 4096 bytes 00:16:35.321 Persistent Memory Region: Not Supported 00:16:35.321 Optional Asynchronous Events Supported 00:16:35.321 Namespace Attribute Notices: Not Supported 00:16:35.321 Firmware Activation Notices: Supported 00:16:35.321 ANA Change Notices: Not Supported 00:16:35.321 PLE Aggregate Log Change Notices: Not Supported 00:16:35.321 LBA Status Info Alert Notices: Not Supported 00:16:35.321 EGE Aggregate Log Change Notices: Not Supported 00:16:35.321 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.321 Zone Descriptor Change Notices: Not Supported 00:16:35.321 Discovery Log Change Notices: Not Supported 00:16:35.321 Controller Attributes 00:16:35.321 128-bit Host Identifier: Not Supported 00:16:35.321 Non-Operational Permissive Mode: Not Supported 00:16:35.321 NVM Sets: Not Supported 00:16:35.321 Read Recovery Levels: Not Supported 00:16:35.321 Endurance Groups: Not Supported 00:16:35.321 Predictable Latency Mode: Not Supported 00:16:35.321 Traffic Based Keep ALive: Not Supported 00:16:35.321 Namespace Granularity: Not Supported 00:16:35.321 SQ Associations: Not Supported 00:16:35.321 UUID List: Not Supported 00:16:35.321 Multi-Domain Subsystem: Not Supported 00:16:35.321 Fixed Capacity Management: Not Supported 00:16:35.321 Variable Capacity Management: Not Supported 00:16:35.321 Delete Endurance Group: Not Supported 00:16:35.321 Delete NVM Set: Not Supported 00:16:35.321 Extended LBA Formats Supported: Not Supported 00:16:35.321 Flexible Data Placement Supported: Not Supported 00:16:35.321 00:16:35.321 Controller Memory Buffer Support 00:16:35.321 ================================ 00:16:35.321 Supported: No 00:16:35.321 00:16:35.321 Persistent Memory Region Support 00:16:35.321 ================================ 00:16:35.321 Supported: No 00:16:35.321 
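The nvme/nvme.sh@15-16 xtrace earlier in this run shows the per-device identify loop; a minimal sketch of what it does, reconstructed from the traced commands (illustrative, not a verbatim excerpt of the script):

  rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
  # enumerate controller BDFs from the generated SPDK config ...
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # ... and run spdk_nvme_identify against each one over PCIe
  for bdf in "${bdfs[@]}"; do
      "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
  done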
00:16:35.321 Admin Command Set Attributes 00:16:35.321 ============================ 00:16:35.321 Security Send/Receive: Not Supported 00:16:35.321 Format NVM: Supported 00:16:35.321 Firmware Activate/Download: Supported 00:16:35.321 Namespace Management: Supported 00:16:35.321 Device Self-Test: Not Supported 00:16:35.321 Directives: Not Supported 00:16:35.321 NVMe-MI: Not Supported 00:16:35.321 Virtualization Management: Not Supported 00:16:35.321 Doorbell Buffer Config: Not Supported 00:16:35.321 Get LBA Status Capability: Not Supported 00:16:35.321 Command & Feature Lockdown Capability: Not Supported 00:16:35.321 Abort Command Limit: 4 00:16:35.321 Async Event Request Limit: 4 00:16:35.321 Number of Firmware Slots: 4 00:16:35.321 Firmware Slot 1 Read-Only: No 00:16:35.321 Firmware Activation Without Reset: Yes 00:16:35.321 Multiple Update Detection Support: No 00:16:35.321 Firmware Update Granularity: No Information Provided 00:16:35.321 Per-Namespace SMART Log: No 00:16:35.321 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.321 Subsystem NQN: 00:16:35.321 Command Effects Log Page: Supported 00:16:35.321 Get Log Page Extended Data: Supported 00:16:35.321 Telemetry Log Pages: Supported 00:16:35.321 Persistent Event Log Pages: Not Supported 00:16:35.321 Supported Log Pages Log Page: May Support 00:16:35.321 Commands Supported & Effects Log Page: Not Supported 00:16:35.321 Feature Identifiers & Effects Log Page:May Support 00:16:35.321 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.321 Data Area 4 for Telemetry Log: Not Supported 00:16:35.321 Error Log Page Entries Supported: 64 00:16:35.321 Keep Alive: Not Supported 00:16:35.321 00:16:35.322 NVM Command Set Attributes 00:16:35.322 ========================== 00:16:35.322 Submission Queue Entry Size 00:16:35.322 Max: 64 00:16:35.322 Min: 64 00:16:35.322 Completion Queue Entry Size 00:16:35.322 Max: 16 00:16:35.322 Min: 16 00:16:35.322 Number of Namespaces: 128 00:16:35.322 Compare Command: Not Supported 00:16:35.322 Write Uncorrectable Command: Supported 00:16:35.322 Dataset Management Command: Supported 00:16:35.322 Write Zeroes Command: Not Supported 00:16:35.322 Set Features Save Field: Not Supported 00:16:35.322 Reservations: Not Supported 00:16:35.322 Timestamp: Not Supported 00:16:35.322 Copy: Not Supported 00:16:35.322 Volatile Write Cache: Not Present 00:16:35.322 Atomic Write Unit (Normal): 1 00:16:35.322 Atomic Write Unit (PFail): 1 00:16:35.322 Atomic Compare & Write Unit: 1 00:16:35.322 Fused Compare & Write: Not Supported 00:16:35.322 Scatter-Gather List 00:16:35.322 SGL Command Set: Not Supported 00:16:35.322 SGL Keyed: Not Supported 00:16:35.322 SGL Bit Bucket Descriptor: Not Supported 00:16:35.322 SGL Metadata Pointer: Not Supported 00:16:35.322 Oversized SGL: Not Supported 00:16:35.322 SGL Metadata Address: Not Supported 00:16:35.322 SGL Offset: Not Supported 00:16:35.322 Transport SGL Data Block: Not Supported 00:16:35.322 Replay Protected Memory Block: Not Supported 00:16:35.322 00:16:35.322 Firmware Slot Information 00:16:35.322 ========================= 00:16:35.322 Active slot: 1 00:16:35.322 Slot 1 Firmware Revision: VDV1Y295 00:16:35.322 00:16:35.322 00:16:35.322 Commands Supported and Effects 00:16:35.322 ============================== 00:16:35.322 Admin Commands 00:16:35.322 -------------- 00:16:35.322 Delete I/O Submission Queue (00h): Supported 00:16:35.322 Create I/O Submission Queue (01h): Supported All-NS-Exclusive 00:16:35.322 Get Log Page (02h): Supported 00:16:35.322 Delete I/O 
Completion Queue (04h): Supported 00:16:35.322 Create I/O Completion Queue (05h): Supported All-NS-Exclusive 00:16:35.322 Identify (06h): Supported 00:16:35.322 Abort (08h): Supported 00:16:35.322 Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 00:16:35.322 Get Features (0Ah): Supported 00:16:35.322 Asynchronous Event Request (0Ch): Supported 00:16:35.322 Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive 00:16:35.322 Firmware Commit (10h): Supported Ctrlr-Cap-Change 00:16:35.322 Firmware Image Download (11h): Supported 00:16:35.322 Namespace Attachment (15h): Supported Per-NS-Exclusive 00:16:35.322 Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive 00:16:35.322 Vendor specific (C8h): Supported 00:16:35.322 Vendor specific (D2h): Supported 00:16:35.322 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:16:35.322 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:16:35.322 I/O Commands 00:16:35.322 ------------ 00:16:35.322 Flush (00h): Supported LBA-Change 00:16:35.322 Write (01h): Supported LBA-Change 00:16:35.322 Read (02h): Supported 00:16:35.322 Write Uncorrectable (04h): Supported LBA-Change 00:16:35.322 Dataset Management (09h): Supported LBA-Change 00:16:35.322 00:16:35.322 Error Log 00:16:35.322 ========= 00:16:35.322 Entry: 0 00:16:35.322 Error Count: 0x1f02 00:16:35.322 Submission Queue Id: 0x2 00:16:35.322 Command Id: 0xffff 00:16:35.322 Phase Bit: 0 00:16:35.322 Status Code: 0x6 00:16:35.322 Status Code Type: 0x0 00:16:35.322 Do Not Retry: 1 00:16:35.322 Error Location: 0xffff 00:16:35.322 LBA: 0x0 00:16:35.322 Namespace: 0xffffffff 00:16:35.322 Vendor Log Page: 0x0 00:16:35.322 ----------- 00:16:35.322 Entry: 1 00:16:35.322 Error Count: 0x1f01 00:16:35.322 Submission Queue Id: 0x2 00:16:35.322 Command Id: 0xffff 00:16:35.322 Phase Bit: 0 00:16:35.322 Status Code: 0x6 00:16:35.322 Status Code Type: 0x0 00:16:35.322 Do Not Retry: 1 00:16:35.322 Error Location: 0xffff 00:16:35.322 LBA: 0x0 00:16:35.322 Namespace: 0xffffffff 00:16:35.322 Vendor Log Page: 0x0 00:16:35.322 ----------- 00:16:35.322 Entry: 2 00:16:35.322 Error Count: 0x1f00 00:16:35.322 Submission Queue Id: 0x0 00:16:35.322 Command Id: 0xffff 00:16:35.322 Phase Bit: 0 00:16:35.322 Status Code: 0x6 00:16:35.322 Status Code Type: 0x0 00:16:35.322 Do Not Retry: 1 00:16:35.322 Error Location: 0xffff 00:16:35.322 LBA: 0x0 00:16:35.322 Namespace: 0xffffffff 00:16:35.322 Vendor Log Page: 0x0 00:16:35.322 ----------- 00:16:35.322 Entry: 3 00:16:35.322 Error Count: 0x1eff 00:16:35.322 Submission Queue Id: 0x2 00:16:35.322 Command Id: 0xffff 00:16:35.322 Phase Bit: 0 00:16:35.322 Status Code: 0x6 00:16:35.322 Status Code Type: 0x0 00:16:35.322 Do Not Retry: 1 00:16:35.322 Error Location: 0xffff 00:16:35.322 LBA: 0x0 00:16:35.322 Namespace: 0xffffffff 00:16:35.322 Vendor Log Page: 0x0 00:16:35.322 ----------- 00:16:35.323 Entry: 4 00:16:35.323 Error Count: 0x1efe 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 5 00:16:35.323 Error Count: 0x1efd 00:16:35.323 Submission Queue Id: 0x0 00:16:35.323 Command Id: 0xffff 
00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 6 00:16:35.323 Error Count: 0x1efc 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 7 00:16:35.323 Error Count: 0x1efb 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 8 00:16:35.323 Error Count: 0x1efa 00:16:35.323 Submission Queue Id: 0x0 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 9 00:16:35.323 Error Count: 0x1ef9 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 10 00:16:35.323 Error Count: 0x1ef8 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 11 00:16:35.323 Error Count: 0x1ef7 00:16:35.323 Submission Queue Id: 0x0 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 12 00:16:35.323 Error Count: 0x1ef6 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 13 00:16:35.323 Error Count: 0x1ef5 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 14 00:16:35.323 Error Count: 0x1ef4 00:16:35.323 Submission Queue Id: 0x0 
00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 15 00:16:35.323 Error Count: 0x1ef3 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 16 00:16:35.323 Error Count: 0x1ef2 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 17 00:16:35.323 Error Count: 0x1ef1 00:16:35.323 Submission Queue Id: 0x0 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.323 Status Code Type: 0x0 00:16:35.323 Do Not Retry: 1 00:16:35.323 Error Location: 0xffff 00:16:35.323 LBA: 0x0 00:16:35.323 Namespace: 0xffffffff 00:16:35.323 Vendor Log Page: 0x0 00:16:35.323 ----------- 00:16:35.323 Entry: 18 00:16:35.323 Error Count: 0x1ef0 00:16:35.323 Submission Queue Id: 0x2 00:16:35.323 Command Id: 0xffff 00:16:35.323 Phase Bit: 0 00:16:35.323 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 19 00:16:35.324 Error Count: 0x1eef 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 20 00:16:35.324 Error Count: 0x1eee 00:16:35.324 Submission Queue Id: 0x0 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 21 00:16:35.324 Error Count: 0x1eed 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 22 00:16:35.324 Error Count: 0x1eec 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 23 00:16:35.324 Error Count: 0x1eeb 
00:16:35.324 Submission Queue Id: 0x0 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 24 00:16:35.324 Error Count: 0x1eea 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 25 00:16:35.324 Error Count: 0x1ee9 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 26 00:16:35.324 Error Count: 0x1ee8 00:16:35.324 Submission Queue Id: 0x0 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 27 00:16:35.324 Error Count: 0x1ee7 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 28 00:16:35.324 Error Count: 0x1ee6 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 29 00:16:35.324 Error Count: 0x1ee5 00:16:35.324 Submission Queue Id: 0x0 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 30 00:16:35.324 Error Count: 0x1ee4 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.324 Namespace: 0xffffffff 00:16:35.324 Vendor Log Page: 0x0 00:16:35.324 ----------- 00:16:35.324 Entry: 31 00:16:35.324 Error Count: 0x1ee3 00:16:35.324 Submission Queue Id: 0x2 00:16:35.324 Command Id: 0xffff 00:16:35.324 Phase Bit: 0 00:16:35.324 Status Code: 0x6 00:16:35.324 Status Code Type: 0x0 00:16:35.324 Do Not Retry: 1 00:16:35.324 Error Location: 0xffff 00:16:35.324 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 32 
00:16:35.325 Error Count: 0x1ee2 00:16:35.325 Submission Queue Id: 0x0 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 33 00:16:35.325 Error Count: 0x1ee1 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 34 00:16:35.325 Error Count: 0x1ee0 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 35 00:16:35.325 Error Count: 0x1edf 00:16:35.325 Submission Queue Id: 0x0 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 36 00:16:35.325 Error Count: 0x1ede 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 37 00:16:35.325 Error Count: 0x1edd 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 38 00:16:35.325 Error Count: 0x1edc 00:16:35.325 Submission Queue Id: 0x0 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 39 00:16:35.325 Error Count: 0x1edb 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 40 00:16:35.325 Error Count: 0x1eda 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 
----------- 00:16:35.325 Entry: 41 00:16:35.325 Error Count: 0x1ed9 00:16:35.325 Submission Queue Id: 0x0 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 42 00:16:35.325 Error Count: 0x1ed8 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 43 00:16:35.325 Error Count: 0x1ed7 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 44 00:16:35.325 Error Count: 0x1ed6 00:16:35.325 Submission Queue Id: 0x0 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.325 Do Not Retry: 1 00:16:35.325 Error Location: 0xffff 00:16:35.325 LBA: 0x0 00:16:35.325 Namespace: 0xffffffff 00:16:35.325 Vendor Log Page: 0x0 00:16:35.325 ----------- 00:16:35.325 Entry: 45 00:16:35.325 Error Count: 0x1ed5 00:16:35.325 Submission Queue Id: 0x2 00:16:35.325 Command Id: 0xffff 00:16:35.325 Phase Bit: 0 00:16:35.325 Status Code: 0x6 00:16:35.325 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 46 00:16:35.326 Error Count: 0x1ed4 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 47 00:16:35.326 Error Count: 0x1ed3 00:16:35.326 Submission Queue Id: 0x0 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 48 00:16:35.326 Error Count: 0x1ed2 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 49 00:16:35.326 Error Count: 0x1ed1 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor 
Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 50 00:16:35.326 Error Count: 0x1ed0 00:16:35.326 Submission Queue Id: 0x0 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 51 00:16:35.326 Error Count: 0x1ecf 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 52 00:16:35.326 Error Count: 0x1ece 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 53 00:16:35.326 Error Count: 0x1ecd 00:16:35.326 Submission Queue Id: 0x0 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 54 00:16:35.326 Error Count: 0x1ecc 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 55 00:16:35.326 Error Count: 0x1ecb 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 56 00:16:35.326 Error Count: 0x1eca 00:16:35.326 Submission Queue Id: 0x0 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 57 00:16:35.326 Error Count: 0x1ec9 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 58 00:16:35.326 Error Count: 0x1ec8 00:16:35.326 Submission Queue Id: 0x2 00:16:35.326 Command Id: 0xffff 00:16:35.326 Phase Bit: 0 00:16:35.326 Status Code: 0x6 00:16:35.326 Status Code Type: 0x0 00:16:35.326 Do Not Retry: 1 00:16:35.326 Error Location: 0xffff 00:16:35.326 LBA: 0x0 00:16:35.326 Namespace: 
0xffffffff 00:16:35.326 Vendor Log Page: 0x0 00:16:35.326 ----------- 00:16:35.326 Entry: 59 00:16:35.326 Error Count: 0x1ec7 00:16:35.327 Submission Queue Id: 0x0 00:16:35.327 Command Id: 0xffff 00:16:35.327 Phase Bit: 0 00:16:35.327 Status Code: 0x6 00:16:35.327 Status Code Type: 0x0 00:16:35.327 Do Not Retry: 1 00:16:35.327 Error Location: 0xffff 00:16:35.327 LBA: 0x0 00:16:35.327 Namespace: 0xffffffff 00:16:35.327 Vendor Log Page: 0x0 00:16:35.327 ----------- 00:16:35.327 Entry: 60 00:16:35.327 Error Count: 0x1ec6 00:16:35.327 Submission Queue Id: 0x2 00:16:35.327 Command Id: 0xffff 00:16:35.327 Phase Bit: 0 00:16:35.327 Status Code: 0x6 00:16:35.327 Status Code Type: 0x0 00:16:35.327 Do Not Retry: 1 00:16:35.327 Error Location: 0xffff 00:16:35.327 LBA: 0x0 00:16:35.327 Namespace: 0xffffffff 00:16:35.327 Vendor Log Page: 0x0 00:16:35.327 ----------- 00:16:35.327 Entry: 61 00:16:35.327 Error Count: 0x1ec5 00:16:35.327 Submission Queue Id: 0x2 00:16:35.327 Command Id: 0xffff 00:16:35.327 Phase Bit: 0 00:16:35.327 Status Code: 0x6 00:16:35.327 Status Code Type: 0x0 00:16:35.327 Do Not Retry: 1 00:16:35.327 Error Location: 0xffff 00:16:35.327 LBA: 0x0 00:16:35.327 Namespace: 0xffffffff 00:16:35.327 Vendor Log Page: 0x0 00:16:35.327 ----------- 00:16:35.327 Entry: 62 00:16:35.327 Error Count: 0x1ec4 00:16:35.327 Submission Queue Id: 0x0 00:16:35.327 Command Id: 0xffff 00:16:35.327 Phase Bit: 0 00:16:35.327 Status Code: 0x6 00:16:35.327 Status Code Type: 0x0 00:16:35.327 Do Not Retry: 1 00:16:35.327 Error Location: 0xffff 00:16:35.327 LBA: 0x0 00:16:35.327 Namespace: 0xffffffff 00:16:35.327 Vendor Log Page: 0x0 00:16:35.327 ----------- 00:16:35.327 Entry: 63 00:16:35.327 Error Count: 0x1ec3 00:16:35.327 Submission Queue Id: 0x2 00:16:35.327 Command Id: 0xffff 00:16:35.327 Phase Bit: 0 00:16:35.327 Status Code: 0x6 00:16:35.327 Status Code Type: 0x0 00:16:35.327 Do Not Retry: 1 00:16:35.327 Error Location: 0xffff 00:16:35.327 LBA: 0x0 00:16:35.327 Namespace: 0xffffffff 00:16:35.327 Vendor Log Page: 0x0 00:16:35.327 00:16:35.327 Arbitration 00:16:35.327 =========== 00:16:35.327 Arbitration Burst: 1 00:16:35.327 Low Priority Weight: 1 00:16:35.327 Medium Priority Weight: 1 00:16:35.327 High Priority Weight: 1 00:16:35.327 00:16:35.327 Power Management 00:16:35.327 ================ 00:16:35.327 Number of Power States: 1 00:16:35.327 Current Power State: Power State #0 00:16:35.327 Power State #0: 00:16:35.327 Max Power: 20.00 W 00:16:35.327 Non-Operational State: Operational 00:16:35.327 Entry Latency: Not Reported 00:16:35.327 Exit Latency: Not Reported 00:16:35.327 Relative Read Throughput: 0 00:16:35.327 Relative Read Latency: 0 00:16:35.327 Relative Write Throughput: 0 00:16:35.327 Relative Write Latency: 0 00:16:35.327 Idle Power: Not Reported 00:16:35.327 Active Power: Not Reported 00:16:35.327 Non-Operational Permissive Mode: Not Supported 00:16:35.327 00:16:35.327 Health Information 00:16:35.327 ================== 00:16:35.327 Critical Warnings: 00:16:35.327 Available Spare Space: OK 00:16:35.327 Temperature: OK 00:16:35.327 Device Reliability: OK 00:16:35.327 Read Only: No 00:16:35.327 Volatile Memory Backup: OK 00:16:35.327 Current Temperature: 308 Kelvin (35 Celsius) 00:16:35.327 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:35.327 Available Spare: 100% 00:16:35.327 Available Spare Threshold: 10% 00:16:35.327 Life Percentage Used: 6% 00:16:35.327 Data Units Read: 103439399 00:16:35.327 Data Units Written: 227621947 00:16:35.327 Host Read Commands: 6940983620 00:16:35.327 Host 
Write Commands: 8109420884 00:16:35.327 Controller Busy Time: 604 minutes 00:16:35.327 Power Cycles: 97 00:16:35.327 Power On Hours: 39056 hours 00:16:35.327 Unsafe Shutdowns: 77 00:16:35.327 Unrecoverable Media Errors: 0 00:16:35.327 Lifetime Error Log Entries: 7938 00:16:35.327 Warning Temperature Time: 474 minutes 00:16:35.327 Critical Temperature Time: 0 minutes 00:16:35.327 00:16:35.327 Number of Queues 00:16:35.327 ================ 00:16:35.327 Number of I/O Submission Queues: 128 00:16:35.327 Number of I/O Completion Queues: 128 00:16:35.327 00:16:35.327 Intel Health Information 00:16:35.327 ================== 00:16:35.327 Program Fail Count: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 0 00:16:35.327 Erase Fail Count: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 0 00:16:35.327 Wear Leveling Count: 00:16:35.327 Normalized Value : 94 00:16:35.327 Current Raw Value: 00:16:35.327 Min: 91 00:16:35.327 Max: 321 00:16:35.327 Avg: 301 00:16:35.327 End to End Error Detection Count: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 0 00:16:35.327 CRC Error Count: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 0 00:16:35.327 Timed Workload, Media Wear: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 65535 00:16:35.327 Timed Workload, Host Read/Write Ratio: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 65535% 00:16:35.327 Timed Workload, Timer: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 65535 00:16:35.327 Thermal Throttle Status: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 00:16:35.327 Percentage: 0% 00:16:35.327 Throttling Event Count: 0 00:16:35.327 Retry Buffer Overflow Counter: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 0 00:16:35.327 PLL Lock Loss Count: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 0 00:16:35.327 NAND Bytes Written: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 21353997 00:16:35.327 Host Bytes Written: 00:16:35.327 Normalized Value : 100 00:16:35.327 Current Raw Value: 3473235 00:16:35.327 00:16:35.327 Intel Temperature Information 00:16:35.327 ================== 00:16:35.328 Current Temperature: 35 00:16:35.328 Overtemp shutdown Flag for last critical component temperature: 0 00:16:35.328 Overtemp shutdown Flag for life critical component temperature: 0 00:16:35.328 Highest temperature: 63 00:16:35.328 Lowest temperature: 18 00:16:35.328 Specified Maximum Operating Temperature: 70 00:16:35.328 Specified Minimum Operating Temperature: 0 00:16:35.328 Estimated offset: 0 00:16:35.328 00:16:35.328 00:16:35.328 Intel Marketing Information 00:16:35.328 ================== 00:16:35.328 Marketing Product Information: Intel(R) SSD DC P4510 Series 00:16:35.328 00:16:35.328 00:16:35.328 Active Namespaces 00:16:35.328 ================= 00:16:35.328 Namespace ID:1 00:16:35.328 Error Recovery Timeout: Unlimited 00:16:35.328 Command Set Identifier: NVM (00h) 00:16:35.328 Deallocate: Supported 00:16:35.328 Deallocated/Unwritten Error: Not Supported 00:16:35.328 Deallocated Read Value: All 0x00 00:16:35.328 Deallocate in Write Zeroes: Not Supported 00:16:35.328 Deallocated Guard Field: 0xFFFF 00:16:35.328 Flush: Not Supported 00:16:35.328 Reservation: Not Supported 00:16:35.328 Namespace Sharing Capabilities: Private 00:16:35.328 Size (in LBAs): 7814037168 (3726GiB) 00:16:35.328 Capacity (in LBAs): 7814037168 (3726GiB) 00:16:35.328 Utilization (in 
LBAs): 7814037168 (3726GiB) 00:16:35.328 NGUID: 01000000D91400000000000000000000 00:16:35.328 EUI64: 000000000000D914 00:16:35.328 Thin Provisioning: Not Supported 00:16:35.328 Per-NS Atomic Units: No 00:16:35.328 NGUID/EUI64 Never Reused: No 00:16:35.328 Namespace Write Protected: No 00:16:35.328 Number of LBA Formats: 2 00:16:35.328 Current LBA Format: LBA Format #00 00:16:35.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.328 LBA Format #01: Data Size: 4096 Metadata Size: 0 00:16:35.328 00:16:35.328 00:16:35.328 real 0m0.788s 00:16:35.328 user 0m0.253s 00:16:35.328 sys 0m0.435s 00:16:35.328 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.328 13:48:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:16:35.328 ************************************ 00:16:35.328 END TEST nvme_identify 00:16:35.328 ************************************ 00:16:35.585 13:48:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:16:35.585 13:48:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.585 13:48:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.585 13:48:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:35.586 ************************************ 00:16:35.586 START TEST nvme_perf 00:16:35.586 ************************************ 00:16:35.586 13:48:06 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:16:35.586 13:48:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:16:36.960 Initializing NVMe Controllers 00:16:36.960 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:16:36.960 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:16:36.960 Initialization complete. Launching workers. 
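(Note: the summary table that follows reports 101970.00 IOPS at the 12288-byte I/O size configured above via -o 12288; multiplying the two and dividing by 2^20 reproduces the 1194.96 MiB/s throughput figure, which is a quick sanity check when reading perf output. The sketch below is not part of the captured run and simply shows that arithmetic; it assumes bc is available on the host.)

# Hedged sketch, not from the captured run: cross-check IOPS x I/O size = throughput.
echo 'scale=2; 101970.00 * 12288 / (1024 * 1024)' | bc   # -> 1194.96 MiB/s, matching the table below

The percentile block further down can be read the same way: each line gives the latency at that percentile in microseconds, so the 50.00000% : 1246.609us entry means the median read completion time is about 1.25 ms, consistent with the 1254.70 us average reported in the table.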
00:16:36.960 ======================================================== 00:16:36.960 Latency(us) 00:16:36.960 Device Information : IOPS MiB/s Average min max 00:16:36.960 PCIE (0000:d8:00.0) NSID 1 from core 0: 101970.00 1194.96 1254.70 73.85 3334.34 00:16:36.960 ======================================================== 00:16:36.960 Total : 101970.00 1194.96 1254.70 73.85 3334.34 00:16:36.960 00:16:36.960 Summary latency data for PCIE (0000:d8:00.0) NSID 1 from core 0: 00:16:36.960 ================================================================================= 00:16:36.960 1.00000% : 225.280us 00:16:36.960 10.00000% : 552.070us 00:16:36.960 25.00000% : 833.447us 00:16:36.960 50.00000% : 1246.609us 00:16:36.960 75.00000% : 1659.770us 00:16:36.960 90.00000% : 1966.080us 00:16:36.960 95.00000% : 2137.043us 00:16:36.960 98.00000% : 2322.254us 00:16:36.960 99.00000% : 2450.477us 00:16:36.960 99.50000% : 2564.452us 00:16:36.960 99.90000% : 2792.403us 00:16:36.960 99.99000% : 3063.096us 00:16:36.960 99.99900% : 3305.294us 00:16:36.960 99.99990% : 3348.035us 00:16:36.960 99.99999% : 3348.035us 00:16:36.960 00:16:36.960 Latency histogram for PCIE (0000:d8:00.0) NSID 1 from core 0: 00:16:36.960 ============================================================================== 00:16:36.960 Range in us Cumulative IO count 00:16:36.960 73.461 - 73.906: 0.0010% ( 1) 00:16:36.960 75.242 - 75.687: 0.0020% ( 1) 00:16:36.960 77.913 - 78.358: 0.0029% ( 1) 00:16:36.960 80.584 - 81.030: 0.0039% ( 1) 00:16:36.960 82.810 - 83.256: 0.0049% ( 1) 00:16:36.960 84.591 - 85.037: 0.0069% ( 2) 00:16:36.961 85.037 - 85.482: 0.0078% ( 1) 00:16:36.961 86.817 - 87.263: 0.0088% ( 1) 00:16:36.961 87.263 - 87.708: 0.0098% ( 1) 00:16:36.961 88.598 - 89.043: 0.0108% ( 1) 00:16:36.961 89.043 - 89.489: 0.0118% ( 1) 00:16:36.961 89.489 - 89.934: 0.0127% ( 1) 00:16:36.961 89.934 - 90.379: 0.0137% ( 1) 00:16:36.961 92.605 - 93.050: 0.0147% ( 1) 00:16:36.961 93.050 - 93.496: 0.0157% ( 1) 00:16:36.961 95.277 - 95.722: 0.0177% ( 2) 00:16:36.961 95.722 - 96.167: 0.0186% ( 1) 00:16:36.961 96.167 - 96.612: 0.0196% ( 1) 00:16:36.961 97.057 - 97.503: 0.0206% ( 1) 00:16:36.961 98.393 - 98.838: 0.0226% ( 2) 00:16:36.961 100.174 - 100.619: 0.0235% ( 1) 00:16:36.961 100.619 - 101.064: 0.0245% ( 1) 00:16:36.961 101.510 - 101.955: 0.0255% ( 1) 00:16:36.961 101.955 - 102.400: 0.0265% ( 1) 00:16:36.961 102.400 - 102.845: 0.0275% ( 1) 00:16:36.961 102.845 - 103.290: 0.0284% ( 1) 00:16:36.961 103.290 - 103.736: 0.0294% ( 1) 00:16:36.961 104.181 - 104.626: 0.0304% ( 1) 00:16:36.961 105.071 - 105.517: 0.0324% ( 2) 00:16:36.961 105.517 - 105.962: 0.0333% ( 1) 00:16:36.961 107.297 - 107.743: 0.0343% ( 1) 00:16:36.961 108.633 - 109.078: 0.0363% ( 2) 00:16:36.961 109.078 - 109.523: 0.0382% ( 2) 00:16:36.961 109.523 - 109.969: 0.0392% ( 1) 00:16:36.961 111.750 - 112.195: 0.0402% ( 1) 00:16:36.961 112.195 - 112.640: 0.0412% ( 1) 00:16:36.961 112.640 - 113.085: 0.0441% ( 3) 00:16:36.961 113.085 - 113.530: 0.0461% ( 2) 00:16:36.961 113.530 - 113.976: 0.0471% ( 1) 00:16:36.961 113.976 - 114.866: 0.0481% ( 1) 00:16:36.961 114.866 - 115.757: 0.0490% ( 1) 00:16:36.961 115.757 - 116.647: 0.0530% ( 4) 00:16:36.961 116.647 - 117.537: 0.0569% ( 4) 00:16:36.961 117.537 - 118.428: 0.0637% ( 7) 00:16:36.961 118.428 - 119.318: 0.0667% ( 3) 00:16:36.961 119.318 - 120.209: 0.0716% ( 5) 00:16:36.961 120.209 - 121.099: 0.0755% ( 4) 00:16:36.961 121.099 - 121.990: 0.0804% ( 5) 00:16:36.961 121.990 - 122.880: 0.0834% ( 3) 00:16:36.961 122.880 - 123.770: 0.0853% ( 2) 00:16:36.961 
123.770 - 124.661: 0.0941% ( 9) 00:16:36.961 124.661 - 125.551: 0.0951% ( 1) 00:16:36.961 125.551 - 126.442: 0.1069% ( 12) 00:16:36.961 126.442 - 127.332: 0.1089% ( 2) 00:16:36.961 127.332 - 128.223: 0.1108% ( 2) 00:16:36.961 128.223 - 129.113: 0.1118% ( 1) 00:16:36.961 129.113 - 130.003: 0.1157% ( 4) 00:16:36.961 130.003 - 130.894: 0.1167% ( 1) 00:16:36.961 130.894 - 131.784: 0.1196% ( 3) 00:16:36.961 131.784 - 132.675: 0.1265% ( 7) 00:16:36.961 132.675 - 133.565: 0.1344% ( 8) 00:16:36.961 133.565 - 134.456: 0.1402% ( 6) 00:16:36.961 134.456 - 135.346: 0.1471% ( 7) 00:16:36.961 135.346 - 136.237: 0.1530% ( 6) 00:16:36.961 136.237 - 137.127: 0.1618% ( 9) 00:16:36.961 137.127 - 138.017: 0.1648% ( 3) 00:16:36.961 138.017 - 138.908: 0.1706% ( 6) 00:16:36.961 138.908 - 139.798: 0.1785% ( 8) 00:16:36.961 139.798 - 140.689: 0.1814% ( 3) 00:16:36.961 140.689 - 141.579: 0.1903% ( 9) 00:16:36.961 141.579 - 142.470: 0.2001% ( 10) 00:16:36.961 142.470 - 143.360: 0.2059% ( 6) 00:16:36.961 143.360 - 144.250: 0.2118% ( 6) 00:16:36.961 144.250 - 145.141: 0.2177% ( 6) 00:16:36.961 145.141 - 146.031: 0.2246% ( 7) 00:16:36.961 146.031 - 146.922: 0.2295% ( 5) 00:16:36.961 146.922 - 147.812: 0.2363% ( 7) 00:16:36.961 147.812 - 148.703: 0.2403% ( 4) 00:16:36.961 148.703 - 149.593: 0.2471% ( 7) 00:16:36.961 149.593 - 150.483: 0.2501% ( 3) 00:16:36.961 150.483 - 151.374: 0.2540% ( 4) 00:16:36.961 151.374 - 152.264: 0.2609% ( 7) 00:16:36.961 152.264 - 153.155: 0.2648% ( 4) 00:16:36.961 153.155 - 154.045: 0.2746% ( 10) 00:16:36.961 154.045 - 154.936: 0.2844% ( 10) 00:16:36.961 154.936 - 155.826: 0.2913% ( 7) 00:16:36.961 155.826 - 156.717: 0.2971% ( 6) 00:16:36.961 156.717 - 157.607: 0.3079% ( 11) 00:16:36.961 157.607 - 158.497: 0.3148% ( 7) 00:16:36.961 158.497 - 159.388: 0.3177% ( 3) 00:16:36.961 159.388 - 160.278: 0.3266% ( 9) 00:16:36.961 160.278 - 161.169: 0.3354% ( 9) 00:16:36.961 161.169 - 162.059: 0.3393% ( 4) 00:16:36.961 162.059 - 162.950: 0.3481% ( 9) 00:16:36.961 162.950 - 163.840: 0.3540% ( 6) 00:16:36.961 163.840 - 164.730: 0.3609% ( 7) 00:16:36.961 164.730 - 165.621: 0.3678% ( 7) 00:16:36.961 165.621 - 166.511: 0.3697% ( 2) 00:16:36.961 166.511 - 167.402: 0.3795% ( 10) 00:16:36.961 167.402 - 168.292: 0.3893% ( 10) 00:16:36.961 168.292 - 169.183: 0.3982% ( 9) 00:16:36.961 169.183 - 170.073: 0.4031% ( 5) 00:16:36.961 170.073 - 170.963: 0.4089% ( 6) 00:16:36.961 170.963 - 171.854: 0.4168% ( 8) 00:16:36.961 171.854 - 172.744: 0.4217% ( 5) 00:16:36.961 172.744 - 173.635: 0.4335% ( 12) 00:16:36.961 173.635 - 174.525: 0.4384% ( 5) 00:16:36.961 174.525 - 175.416: 0.4462% ( 8) 00:16:36.961 175.416 - 176.306: 0.4541% ( 8) 00:16:36.961 176.306 - 177.197: 0.4609% ( 7) 00:16:36.961 177.197 - 178.087: 0.4688% ( 8) 00:16:36.961 178.087 - 178.977: 0.4746% ( 6) 00:16:36.961 178.977 - 179.868: 0.4805% ( 6) 00:16:36.961 179.868 - 180.758: 0.4943% ( 14) 00:16:36.961 180.758 - 181.649: 0.5011% ( 7) 00:16:36.961 181.649 - 182.539: 0.5080% ( 7) 00:16:36.961 182.539 - 183.430: 0.5139% ( 6) 00:16:36.961 183.430 - 184.320: 0.5237% ( 10) 00:16:36.961 184.320 - 185.210: 0.5315% ( 8) 00:16:36.961 185.210 - 186.101: 0.5433% ( 12) 00:16:36.961 186.101 - 186.991: 0.5590% ( 16) 00:16:36.961 186.991 - 187.882: 0.5600% ( 1) 00:16:36.961 187.882 - 188.772: 0.5688% ( 9) 00:16:36.961 188.772 - 189.663: 0.5835% ( 15) 00:16:36.961 189.663 - 190.553: 0.5923% ( 9) 00:16:36.961 190.553 - 191.443: 0.6051% ( 13) 00:16:36.961 191.443 - 192.334: 0.6129% ( 8) 00:16:36.961 192.334 - 193.224: 0.6267% ( 14) 00:16:36.961 193.224 - 194.115: 0.6325% ( 
6) 00:16:36.961 194.115 - 195.005: 0.6365% ( 4) 00:16:36.961 195.005 - 195.896: 0.6453% ( 9) 00:16:36.961 195.896 - 196.786: 0.6502% ( 5) 00:16:36.961 196.786 - 197.677: 0.6551% ( 5) 00:16:36.961 197.677 - 198.567: 0.6659% ( 11) 00:16:36.961 198.567 - 199.457: 0.6767% ( 11) 00:16:36.961 199.457 - 200.348: 0.6855% ( 9) 00:16:36.961 200.348 - 201.238: 0.6982% ( 13) 00:16:36.961 201.238 - 202.129: 0.7090% ( 11) 00:16:36.961 202.129 - 203.019: 0.7286% ( 20) 00:16:36.961 203.019 - 203.910: 0.7365% ( 8) 00:16:36.961 203.910 - 204.800: 0.7434% ( 7) 00:16:36.961 204.800 - 205.690: 0.7581% ( 15) 00:16:36.961 205.690 - 206.581: 0.7659% ( 8) 00:16:36.961 206.581 - 207.471: 0.7806% ( 15) 00:16:36.961 207.471 - 208.362: 0.7914% ( 11) 00:16:36.961 208.362 - 209.252: 0.8051% ( 14) 00:16:36.961 209.252 - 210.143: 0.8071% ( 2) 00:16:36.961 210.143 - 211.033: 0.8179% ( 11) 00:16:36.961 211.033 - 211.923: 0.8336% ( 16) 00:16:36.961 211.923 - 212.814: 0.8453% ( 12) 00:16:36.961 212.814 - 213.704: 0.8571% ( 12) 00:16:36.961 213.704 - 214.595: 0.8669% ( 10) 00:16:36.961 214.595 - 215.485: 0.8787% ( 12) 00:16:36.961 215.485 - 216.376: 0.8934% ( 15) 00:16:36.961 216.376 - 217.266: 0.9061% ( 13) 00:16:36.961 217.266 - 218.157: 0.9140% ( 8) 00:16:36.961 218.157 - 219.047: 0.9258% ( 12) 00:16:36.961 219.047 - 219.937: 0.9356% ( 10) 00:16:36.961 219.937 - 220.828: 0.9483% ( 13) 00:16:36.961 220.828 - 221.718: 0.9670% ( 19) 00:16:36.961 221.718 - 222.609: 0.9768% ( 10) 00:16:36.961 222.609 - 223.499: 0.9875% ( 11) 00:16:36.961 223.499 - 224.390: 0.9954% ( 8) 00:16:36.961 224.390 - 225.280: 1.0052% ( 10) 00:16:36.961 225.280 - 226.170: 1.0189% ( 14) 00:16:36.961 226.170 - 227.061: 1.0228% ( 4) 00:16:36.961 227.061 - 227.951: 1.0346% ( 12) 00:16:36.961 227.951 - 229.732: 1.0601% ( 26) 00:16:36.961 229.732 - 231.513: 1.0748% ( 15) 00:16:36.961 231.513 - 233.294: 1.0974% ( 23) 00:16:36.961 233.294 - 235.075: 1.1209% ( 24) 00:16:36.961 235.075 - 236.856: 1.1425% ( 22) 00:16:36.961 236.856 - 238.637: 1.1572% ( 15) 00:16:36.961 238.637 - 240.417: 1.1788% ( 22) 00:16:36.961 240.417 - 242.198: 1.2043% ( 26) 00:16:36.961 242.198 - 243.979: 1.2278% ( 24) 00:16:36.961 243.979 - 245.760: 1.2484% ( 21) 00:16:36.961 245.760 - 247.541: 1.2680% ( 20) 00:16:36.961 247.541 - 249.322: 1.2984% ( 31) 00:16:36.961 249.322 - 251.103: 1.3200% ( 22) 00:16:36.961 251.103 - 252.883: 1.3465% ( 27) 00:16:36.961 252.883 - 254.664: 1.3661% ( 20) 00:16:36.961 254.664 - 256.445: 1.3935% ( 28) 00:16:36.961 256.445 - 258.226: 1.4318% ( 39) 00:16:36.961 258.226 - 260.007: 1.4504% ( 19) 00:16:36.961 260.007 - 261.788: 1.4789% ( 29) 00:16:36.961 261.788 - 263.569: 1.5063% ( 28) 00:16:36.961 263.569 - 265.350: 1.5338% ( 28) 00:16:36.962 265.350 - 267.130: 1.5603% ( 27) 00:16:36.962 267.130 - 268.911: 1.5975% ( 38) 00:16:36.962 268.911 - 270.692: 1.6309% ( 34) 00:16:36.962 270.692 - 272.473: 1.6466% ( 16) 00:16:36.962 272.473 - 274.254: 1.6770% ( 31) 00:16:36.962 274.254 - 276.035: 1.7054% ( 29) 00:16:36.962 276.035 - 277.816: 1.7358% ( 31) 00:16:36.962 277.816 - 279.597: 1.7682% ( 33) 00:16:36.962 279.597 - 281.377: 1.7995% ( 32) 00:16:36.962 281.377 - 283.158: 1.8299% ( 31) 00:16:36.962 283.158 - 284.939: 1.8554% ( 26) 00:16:36.962 284.939 - 286.720: 1.8878% ( 33) 00:16:36.962 286.720 - 288.501: 1.9143% ( 27) 00:16:36.962 288.501 - 290.282: 1.9398% ( 26) 00:16:36.962 290.282 - 292.063: 1.9633% ( 24) 00:16:36.962 292.063 - 293.843: 1.9996% ( 37) 00:16:36.962 293.843 - 295.624: 2.0369% ( 38) 00:16:36.962 295.624 - 297.405: 2.0712% ( 35) 00:16:36.962 297.405 
- 299.186: 2.1016% ( 31) 00:16:36.962 299.186 - 300.967: 2.1379% ( 37) 00:16:36.962 300.967 - 302.748: 2.1722% ( 35) 00:16:36.962 302.748 - 304.529: 2.2046% ( 33) 00:16:36.962 304.529 - 306.310: 2.2409% ( 37) 00:16:36.962 306.310 - 308.090: 2.2781% ( 38) 00:16:36.962 308.090 - 309.871: 2.3056% ( 28) 00:16:36.962 309.871 - 311.652: 2.3497% ( 45) 00:16:36.962 311.652 - 313.433: 2.3880% ( 39) 00:16:36.962 313.433 - 315.214: 2.4282% ( 41) 00:16:36.962 315.214 - 316.995: 2.4635% ( 36) 00:16:36.962 316.995 - 318.776: 2.4978% ( 35) 00:16:36.962 318.776 - 320.557: 2.5439% ( 47) 00:16:36.962 320.557 - 322.337: 2.5831% ( 40) 00:16:36.962 322.337 - 324.118: 2.6204% ( 38) 00:16:36.962 324.118 - 325.899: 2.6576% ( 38) 00:16:36.962 325.899 - 327.680: 2.7037% ( 47) 00:16:36.962 327.680 - 329.461: 2.7439% ( 41) 00:16:36.962 329.461 - 331.242: 2.7959% ( 53) 00:16:36.962 331.242 - 333.023: 2.8479% ( 53) 00:16:36.962 333.023 - 334.803: 2.8832% ( 36) 00:16:36.962 334.803 - 336.584: 2.9303% ( 48) 00:16:36.962 336.584 - 338.365: 2.9724% ( 43) 00:16:36.962 338.365 - 340.146: 3.0087% ( 37) 00:16:36.962 340.146 - 341.927: 3.0470% ( 39) 00:16:36.962 341.927 - 343.708: 3.0980% ( 52) 00:16:36.962 343.708 - 345.489: 3.1450% ( 48) 00:16:36.962 345.489 - 347.270: 3.1960% ( 52) 00:16:36.962 347.270 - 349.050: 3.2372% ( 42) 00:16:36.962 349.050 - 350.831: 3.2990% ( 63) 00:16:36.962 350.831 - 352.612: 3.3441% ( 46) 00:16:36.962 352.612 - 354.393: 3.3843% ( 41) 00:16:36.962 354.393 - 356.174: 3.4245% ( 41) 00:16:36.962 356.174 - 357.955: 3.4746% ( 51) 00:16:36.962 357.955 - 359.736: 3.5197% ( 46) 00:16:36.962 359.736 - 361.517: 3.5599% ( 41) 00:16:36.962 361.517 - 363.297: 3.6020% ( 43) 00:16:36.962 363.297 - 365.078: 3.6452% ( 44) 00:16:36.962 365.078 - 366.859: 3.6981% ( 54) 00:16:36.962 366.859 - 368.640: 3.7491% ( 52) 00:16:36.962 368.640 - 370.421: 3.7884% ( 40) 00:16:36.962 370.421 - 372.202: 3.8394% ( 52) 00:16:36.962 372.202 - 373.983: 3.8855% ( 47) 00:16:36.962 373.983 - 375.763: 3.9315% ( 47) 00:16:36.962 375.763 - 377.544: 3.9796% ( 49) 00:16:36.962 377.544 - 379.325: 4.0247% ( 46) 00:16:36.962 379.325 - 381.106: 4.0688% ( 45) 00:16:36.962 381.106 - 382.887: 4.1140% ( 46) 00:16:36.962 382.887 - 384.668: 4.1679% ( 55) 00:16:36.962 384.668 - 386.449: 4.2140% ( 47) 00:16:36.962 386.449 - 388.230: 4.2611% ( 48) 00:16:36.962 388.230 - 390.010: 4.3199% ( 60) 00:16:36.962 390.010 - 391.791: 4.3778% ( 59) 00:16:36.962 391.791 - 393.572: 4.4366% ( 60) 00:16:36.962 393.572 - 395.353: 4.4886% ( 53) 00:16:36.962 395.353 - 397.134: 4.5386% ( 51) 00:16:36.962 397.134 - 398.915: 4.5945% ( 57) 00:16:36.962 398.915 - 400.696: 4.6367% ( 43) 00:16:36.962 400.696 - 402.477: 4.6896% ( 54) 00:16:36.962 402.477 - 404.257: 4.7524% ( 64) 00:16:36.962 404.257 - 406.038: 4.8044% ( 53) 00:16:36.962 406.038 - 407.819: 4.8573% ( 54) 00:16:36.962 407.819 - 409.600: 4.9073% ( 51) 00:16:36.962 409.600 - 411.381: 4.9583% ( 52) 00:16:36.962 411.381 - 413.162: 5.0172% ( 60) 00:16:36.962 413.162 - 414.943: 5.0662% ( 50) 00:16:36.962 414.943 - 416.723: 5.1142% ( 49) 00:16:36.962 416.723 - 418.504: 5.1711% ( 58) 00:16:36.962 418.504 - 420.285: 5.2251% ( 55) 00:16:36.962 420.285 - 422.066: 5.2770% ( 53) 00:16:36.962 422.066 - 423.847: 5.3359% ( 60) 00:16:36.962 423.847 - 425.628: 5.3967% ( 62) 00:16:36.962 425.628 - 427.409: 5.4565% ( 61) 00:16:36.962 427.409 - 429.190: 5.5173% ( 62) 00:16:36.962 429.190 - 430.970: 5.5771% ( 61) 00:16:36.962 430.970 - 432.751: 5.6399% ( 64) 00:16:36.962 432.751 - 434.532: 5.6899% ( 51) 00:16:36.962 434.532 - 436.313: 
5.7507% ( 62) 00:16:36.962 436.313 - 438.094: 5.8046% ( 55) 00:16:36.962 438.094 - 439.875: 5.8645% ( 61) 00:16:36.962 439.875 - 441.656: 5.9302% ( 67) 00:16:36.962 441.656 - 443.437: 5.9959% ( 67) 00:16:36.962 443.437 - 445.217: 6.0635% ( 69) 00:16:36.962 445.217 - 446.998: 6.1253% ( 63) 00:16:36.962 446.998 - 448.779: 6.1989% ( 75) 00:16:36.962 448.779 - 450.560: 6.2607% ( 63) 00:16:36.962 450.560 - 452.341: 6.3156% ( 56) 00:16:36.962 452.341 - 454.122: 6.3793% ( 65) 00:16:36.962 454.122 - 455.903: 6.4460% ( 68) 00:16:36.962 455.903 - 459.464: 6.5608% ( 117) 00:16:36.962 459.464 - 463.026: 6.6775% ( 119) 00:16:36.962 463.026 - 466.588: 6.7834% ( 108) 00:16:36.962 466.588 - 470.150: 6.9118% ( 131) 00:16:36.962 470.150 - 473.711: 7.0197% ( 110) 00:16:36.962 473.711 - 477.273: 7.1374% ( 120) 00:16:36.962 477.273 - 480.835: 7.2698% ( 135) 00:16:36.962 480.835 - 484.397: 7.4032% ( 136) 00:16:36.962 484.397 - 487.958: 7.5267% ( 126) 00:16:36.962 487.958 - 491.520: 7.6758% ( 152) 00:16:36.962 491.520 - 495.082: 7.7964% ( 123) 00:16:36.962 495.082 - 498.643: 7.9259% ( 132) 00:16:36.962 498.643 - 502.205: 8.0543% ( 131) 00:16:36.962 502.205 - 505.767: 8.2083% ( 157) 00:16:36.962 505.767 - 509.329: 8.3623% ( 157) 00:16:36.962 509.329 - 512.890: 8.5143% ( 155) 00:16:36.962 512.890 - 516.452: 8.6643% ( 153) 00:16:36.962 516.452 - 520.014: 8.8134% ( 152) 00:16:36.962 520.014 - 523.576: 8.9389% ( 128) 00:16:36.962 523.576 - 527.137: 9.0968% ( 161) 00:16:36.962 527.137 - 530.699: 9.2370% ( 143) 00:16:36.962 530.699 - 534.261: 9.3949% ( 161) 00:16:36.962 534.261 - 537.823: 9.5401% ( 148) 00:16:36.962 537.823 - 541.384: 9.6803% ( 143) 00:16:36.962 541.384 - 544.946: 9.8097% ( 132) 00:16:36.962 544.946 - 548.508: 9.9519% ( 145) 00:16:36.962 548.508 - 552.070: 10.0961% ( 147) 00:16:36.962 552.070 - 555.631: 10.2589% ( 166) 00:16:36.962 555.631 - 559.193: 10.4070% ( 151) 00:16:36.962 559.193 - 562.755: 10.5668% ( 163) 00:16:36.962 562.755 - 566.317: 10.7120% ( 148) 00:16:36.962 566.317 - 569.878: 10.8836% ( 175) 00:16:36.962 569.878 - 573.440: 11.0228% ( 142) 00:16:36.962 573.440 - 577.002: 11.1729% ( 153) 00:16:36.962 577.002 - 580.563: 11.3190% ( 149) 00:16:36.962 580.563 - 584.125: 11.4691% ( 153) 00:16:36.962 584.125 - 587.687: 11.6034% ( 137) 00:16:36.962 587.687 - 591.249: 11.7848% ( 185) 00:16:36.962 591.249 - 594.810: 11.9457% ( 164) 00:16:36.962 594.810 - 598.372: 12.1055% ( 163) 00:16:36.962 598.372 - 601.934: 12.2673% ( 165) 00:16:36.962 601.934 - 605.496: 12.4458% ( 182) 00:16:36.962 605.496 - 609.057: 12.6204% ( 178) 00:16:36.962 609.057 - 612.619: 12.7979% ( 181) 00:16:36.962 612.619 - 616.181: 12.9646% ( 170) 00:16:36.962 616.181 - 619.743: 13.1323% ( 171) 00:16:36.962 619.743 - 623.304: 13.3186% ( 190) 00:16:36.962 623.304 - 626.866: 13.5157% ( 201) 00:16:36.962 626.866 - 630.428: 13.6952% ( 183) 00:16:36.962 630.428 - 633.990: 13.8649% ( 173) 00:16:36.962 633.990 - 637.551: 14.0482% ( 187) 00:16:36.962 637.551 - 641.113: 14.2199% ( 175) 00:16:36.962 641.113 - 644.675: 14.3797% ( 163) 00:16:36.962 644.675 - 648.237: 14.5435% ( 167) 00:16:36.962 648.237 - 651.798: 14.7239% ( 184) 00:16:36.962 651.798 - 655.360: 14.9152% ( 195) 00:16:36.962 655.360 - 658.922: 15.0829% ( 171) 00:16:36.962 658.922 - 662.483: 15.2859% ( 207) 00:16:36.962 662.483 - 666.045: 15.4830% ( 201) 00:16:36.962 666.045 - 669.607: 15.6566% ( 177) 00:16:36.962 669.607 - 673.169: 15.8409% ( 188) 00:16:36.962 673.169 - 676.730: 16.0224% ( 185) 00:16:36.962 676.730 - 680.292: 16.2254% ( 207) 00:16:36.962 680.292 - 683.854: 
16.4156% ( 194) 00:16:36.962 683.854 - 687.416: 16.6392% ( 228) 00:16:36.962 687.416 - 690.977: 16.8628% ( 228) 00:16:36.962 690.977 - 694.539: 17.0638% ( 205) 00:16:36.962 694.539 - 698.101: 17.2786% ( 219) 00:16:36.962 698.101 - 701.663: 17.4924% ( 218) 00:16:36.962 701.663 - 705.224: 17.6807% ( 192) 00:16:36.962 705.224 - 708.786: 17.8680% ( 191) 00:16:36.962 708.786 - 712.348: 18.0818% ( 218) 00:16:36.962 712.348 - 715.910: 18.2642% ( 186) 00:16:36.962 715.910 - 719.471: 18.4574% ( 197) 00:16:36.962 719.471 - 723.033: 18.6829% ( 230) 00:16:36.962 723.033 - 726.595: 18.8938% ( 215) 00:16:36.962 726.595 - 730.157: 19.1066% ( 217) 00:16:36.962 730.157 - 733.718: 19.3302% ( 228) 00:16:36.962 733.718 - 737.280: 19.5067% ( 180) 00:16:36.962 737.280 - 740.842: 19.7401% ( 238) 00:16:36.962 740.842 - 744.403: 19.9500% ( 214) 00:16:36.962 744.403 - 747.965: 20.1461% ( 200) 00:16:36.962 747.965 - 751.527: 20.3344% ( 192) 00:16:36.963 751.527 - 755.089: 20.5413% ( 211) 00:16:36.963 755.089 - 758.650: 20.7306% ( 193) 00:16:36.963 758.650 - 762.212: 20.9307% ( 204) 00:16:36.963 762.212 - 765.774: 21.1513% ( 225) 00:16:36.963 765.774 - 769.336: 21.3524% ( 205) 00:16:36.963 769.336 - 772.897: 21.5760% ( 228) 00:16:36.963 772.897 - 776.459: 21.7927% ( 221) 00:16:36.963 776.459 - 780.021: 21.9976% ( 209) 00:16:36.963 780.021 - 783.583: 22.1771% ( 183) 00:16:36.963 783.583 - 787.144: 22.3889% ( 216) 00:16:36.963 787.144 - 790.706: 22.5949% ( 210) 00:16:36.963 790.706 - 794.268: 22.8195% ( 229) 00:16:36.963 794.268 - 797.830: 23.0176% ( 202) 00:16:36.963 797.830 - 801.391: 23.2274% ( 214) 00:16:36.963 801.391 - 804.953: 23.4324% ( 209) 00:16:36.963 804.953 - 808.515: 23.6422% ( 214) 00:16:36.963 808.515 - 812.077: 23.8619% ( 224) 00:16:36.963 812.077 - 815.638: 24.0973% ( 240) 00:16:36.963 815.638 - 819.200: 24.3042% ( 211) 00:16:36.963 819.200 - 822.762: 24.5170% ( 217) 00:16:36.963 822.762 - 826.323: 24.7504% ( 238) 00:16:36.963 826.323 - 829.885: 24.9897% ( 244) 00:16:36.963 829.885 - 833.447: 25.1701% ( 184) 00:16:36.963 833.447 - 837.009: 25.4232% ( 258) 00:16:36.963 837.009 - 840.570: 25.6507% ( 232) 00:16:36.963 840.570 - 844.132: 25.8880% ( 242) 00:16:36.963 844.132 - 847.694: 26.1057% ( 222) 00:16:36.963 847.694 - 851.256: 26.3254% ( 224) 00:16:36.963 851.256 - 854.817: 26.5353% ( 214) 00:16:36.963 854.817 - 858.379: 26.7432% ( 212) 00:16:36.963 858.379 - 861.941: 26.9687% ( 230) 00:16:36.963 861.941 - 865.503: 27.1825% ( 218) 00:16:36.963 865.503 - 869.064: 27.3845% ( 206) 00:16:36.963 869.064 - 872.626: 27.6091% ( 229) 00:16:36.963 872.626 - 876.188: 27.8415% ( 237) 00:16:36.963 876.188 - 879.750: 28.0465% ( 209) 00:16:36.963 879.750 - 883.311: 28.2789% ( 237) 00:16:36.963 883.311 - 886.873: 28.4917% ( 217) 00:16:36.963 886.873 - 890.435: 28.7114% ( 224) 00:16:36.963 890.435 - 893.997: 28.9350% ( 228) 00:16:36.963 893.997 - 897.558: 29.1615% ( 231) 00:16:36.963 897.558 - 901.120: 29.3841% ( 227) 00:16:36.963 901.120 - 904.682: 29.6028% ( 223) 00:16:36.963 904.682 - 908.243: 29.8137% ( 215) 00:16:36.963 908.243 - 911.805: 30.0618% ( 253) 00:16:36.963 911.805 - 918.929: 30.4992% ( 446) 00:16:36.963 918.929 - 926.052: 30.9385% ( 448) 00:16:36.963 926.052 - 933.176: 31.3671% ( 437) 00:16:36.963 933.176 - 940.299: 31.8094% ( 451) 00:16:36.963 940.299 - 947.423: 32.2703% ( 470) 00:16:36.963 947.423 - 954.546: 32.6724% ( 410) 00:16:36.963 954.546 - 961.670: 33.0685% ( 404) 00:16:36.963 961.670 - 968.793: 33.5393% ( 480) 00:16:36.963 968.793 - 975.917: 33.9924% ( 462) 00:16:36.963 975.917 - 983.040: 
34.3944% ( 410)
00:16:36.963 [read-workload latency histogram continues: buckets from 983.040us through 3348.035us, cumulative IO reaching 100.0000%; per-bucket counts elided]
00:16:36.964
00:16:36.964 13:48:08 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:16:38.337 Initializing NVMe Controllers
00:16:38.337 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54]
00:16:38.337 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0
00:16:38.337 Initialization complete. Launching workers.
00:16:38.337 ========================================================
00:16:38.337 Latency(us)
00:16:38.337 Device Information : IOPS MiB/s Average min max
00:16:38.337 PCIE (0000:d8:00.0) NSID 1 from core 0: 126437.41 1481.69 1011.54 54.88 2084.45
00:16:38.337 ========================================================
00:16:38.337 Total : 126437.41 1481.69 1011.54 54.88 2084.45
00:16:38.337
00:16:38.337 Summary latency data for PCIE (0000:d8:00.0) NSID 1 from core 0:
00:16:38.337 =================================================================================
00:16:38.337 1.00000% : 940.299us
00:16:38.337 10.00000% : 968.793us
00:16:38.337 25.00000% : 990.163us
00:16:38.337 50.00000% : 1018.657us
00:16:38.337 75.00000% : 1040.028us
00:16:38.337 90.00000% : 1061.398us
00:16:38.337 95.00000% : 1068.522us
00:16:38.337 98.00000% : 1082.769us
00:16:38.337 99.00000% : 1089.892us
00:16:38.337 99.50000% : 1146.880us
00:16:38.337 99.90000% : 1852.104us
00:16:38.337 99.99000% : 2008.821us
00:16:38.337 99.99900% : 2094.303us
00:16:38.337 99.99990% : 2094.303us
00:16:38.337 99.99999% : 2094.303us
00:16:38.337
00:16:38.337 Latency histogram for PCIE (0000:d8:00.0) NSID 1 from core 0:
00:16:38.337 ==============================================================================
00:16:38.337 Range in us Cumulative IO count
00:16:38.338 [write-workload latency histogram buckets from 54.762us through 2094.303us, cumulative IO reaching 100.0000%; per-bucket counts elided]
00:16:38.338
00:16:38.338 13:48:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:16:38.338
00:16:38.338 real 0m2.690s
00:16:38.338 user 0m2.205s
00:16:38.338 sys 0m0.374s
13:48:09 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
13:48:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:16:38.338 ************************************
00:16:38.338 END TEST nvme_perf
00:16:38.338 ************************************
13:48:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0
13:48:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
13:48:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:48:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:38.338 ************************************
00:16:38.338 START TEST nvme_hello_world
00:16:38.338 ************************************
13:48:09 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0
00:16:38.596 Initializing NVMe Controllers
00:16:38.596 Attached to 0000:d8:00.0
00:16:38.596 Namespace ID: 1 size: 4000GB
00:16:38.596 Initialization complete.
00:16:38.596 INFO: using host memory buffer for IO
00:16:38.596 Hello world!
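For reference, the write phase of the perf run above can be reproduced by hand roughly as follows. This is a sketch only: the -q/-w/-o/-t/-LL flags are copied from the invocation in this log, the -r target syntax is taken from the doorbell_aers invocation later in the log, the flag comments are a best-effort reading rather than authoritative spdk_nvme_perf documentation, and the BDF must be adjusted to your own controller.
  # Sketch: rerun the write phase of the perf test above outside Jenkins.
  SPDK_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk    # path as used by this job; adjust locally
  sudo "$SPDK_DIR/build/bin/spdk_nvme_perf" \
      -r 'trtype:PCIe traddr:0000:d8:00.0' \
      -q 128 -w write -o 12288 -t 1 -LL
  # -q queue depth, -w workload type, -o I/O size in bytes, -t seconds to run,
  # -LL also collect the latency summary and histogram shown above.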
00:16:38.596 00:16:38.596 real 0m0.337s 00:16:38.596 user 0m0.100s 00:16:38.596 sys 0m0.188s 00:16:38.597 13:48:09 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.597 13:48:09 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:38.597 ************************************ 00:16:38.597 END TEST nvme_hello_world 00:16:38.597 ************************************ 00:16:38.597 13:48:10 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl 00:16:38.597 13:48:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.597 13:48:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.597 13:48:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.597 ************************************ 00:16:38.597 START TEST nvme_sgl 00:16:38.597 ************************************ 00:16:38.597 13:48:10 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl 00:16:39.160 NVMe Readv/Writev Request test 00:16:39.160 Attached to 0000:d8:00.0 00:16:39.160 0000:d8:00.0: build_io_request_0 test passed 00:16:39.160 0000:d8:00.0: build_io_request_1 test passed 00:16:39.160 0000:d8:00.0: build_io_request_2 test passed 00:16:39.160 0000:d8:00.0: build_io_request_3 test passed 00:16:39.160 0000:d8:00.0: build_io_request_4 test passed 00:16:39.160 0000:d8:00.0: build_io_request_5 test passed 00:16:39.160 0000:d8:00.0: build_io_request_6 test passed 00:16:39.160 0000:d8:00.0: build_io_request_7 test passed 00:16:39.160 0000:d8:00.0: build_io_request_8 test passed 00:16:39.160 0000:d8:00.0: build_io_request_9 test passed 00:16:39.160 0000:d8:00.0: build_io_request_10 test passed 00:16:39.160 0000:d8:00.0: build_io_request_11 test passed 00:16:39.160 Cleaning up... 00:16:39.160 00:16:39.160 real 0m0.395s 00:16:39.160 user 0m0.155s 00:16:39.160 sys 0m0.184s 00:16:39.160 13:48:10 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.160 13:48:10 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:16:39.160 ************************************ 00:16:39.160 END TEST nvme_sgl 00:16:39.160 ************************************ 00:16:39.160 13:48:10 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp 00:16:39.160 13:48:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.160 13:48:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.160 13:48:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.160 ************************************ 00:16:39.160 START TEST nvme_e2edp 00:16:39.160 ************************************ 00:16:39.160 13:48:10 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp 00:16:39.417 NVMe Write/Read with End-to-End data protection test 00:16:39.417 Attached to 0000:d8:00.0 00:16:39.417 Cleaning up... 
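The START TEST / END TEST banners and the real/user/sys trailers framing each test above come from SPDK's run_test wrapper (the autotest_common.sh frames visible in the trace). A minimal stand-in with the same observable shape might look like the sketch below; this is an illustration only, not the SPDK implementation.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                                  # run the test command and print timings
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # e.g. run_test_sketch nvme_sgl "$SPDK_DIR/test/nvme/sgl/sgl"   (paths assumed as above)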
00:16:39.417 00:16:39.417 real 0m0.324s 00:16:39.417 user 0m0.097s 00:16:39.417 sys 0m0.168s 00:16:39.417 13:48:10 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.417 13:48:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:16:39.417 ************************************ 00:16:39.417 END TEST nvme_e2edp 00:16:39.417 ************************************ 00:16:39.417 13:48:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve 00:16:39.417 13:48:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.417 13:48:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.417 13:48:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.673 ************************************ 00:16:39.673 START TEST nvme_reserve 00:16:39.673 ************************************ 00:16:39.673 13:48:10 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve 00:16:39.931 ===================================================== 00:16:39.931 NVMe Controller at PCI bus 216, device 0, function 0 00:16:39.931 ===================================================== 00:16:39.931 Reservations: Not Supported 00:16:39.931 Reservation test passed 00:16:39.931 00:16:39.931 real 0m0.320s 00:16:39.931 user 0m0.090s 00:16:39.931 sys 0m0.188s 00:16:39.931 13:48:11 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.931 13:48:11 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:16:39.931 ************************************ 00:16:39.931 END TEST nvme_reserve 00:16:39.931 ************************************ 00:16:39.931 13:48:11 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection 00:16:39.931 13:48:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.931 13:48:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.931 13:48:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.931 ************************************ 00:16:39.931 START TEST nvme_err_injection 00:16:39.931 ************************************ 00:16:39.931 13:48:11 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection 00:16:40.189 NVMe Error Injection test 00:16:40.189 Attached to 0000:d8:00.0 00:16:40.189 0000:d8:00.0: get features failed as expected 00:16:40.189 0000:d8:00.0: get features successfully as expected 00:16:40.189 0000:d8:00.0: read failed as expected 00:16:40.189 0000:d8:00.0: read successfully as expected 00:16:40.189 Cleaning up... 
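A small note on the reserve output above: "NVMe Controller at PCI bus 216, device 0, function 0" is the same controller as 0000:d8:00.0, since 216 is simply the bus number printed in decimal.
  printf 'bus 0x%02x\n' 216    # prints "bus 0xd8"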
00:16:40.447
00:16:40.447 real 0m0.330s
00:16:40.447 user 0m0.101s
00:16:40.447 sys 0m0.187s
13:48:11 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
13:48:11 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:16:40.447 ************************************
00:16:40.447 END TEST nvme_err_injection
00:16:40.447 ************************************
13:48:11 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
13:48:11 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
13:48:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:48:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:40.447 ************************************
00:16:40.447 START TEST nvme_overhead
00:16:40.447 ************************************
13:48:11 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:16:41.822 Initializing NVMe Controllers
00:16:41.822 Attached to 0000:d8:00.0
00:16:41.822 Initialization complete. Launching workers.
00:16:41.822 submit (in ns) avg, min, max = 4908.1, 4590.4, 567958.3
00:16:41.822 complete (in ns) avg, min, max = 2862.7, 2764.3, 66578.3
00:16:41.822
00:16:41.822 Submit histogram
00:16:41.822 ================
00:16:41.822 Range in us Cumulative Count
00:16:41.822 [submit-latency histogram buckets from 4.563us through 569.878us, cumulative count reaching 100.0000%; per-bucket counts elided]
00:16:41.823
00:16:41.823 Complete histogram
00:16:41.823 ==================
00:16:41.823 Range in us Cumulative Count
00:16:41.823 [completion-latency histogram buckets from 2.755us through 66.783us, cumulative count reaching 100.0000%; per-bucket counts elided]
00:16:41.824
00:16:41.824
00:16:41.824 real 0m1.328s
00:16:41.824 user 0m1.097s
00:16:41.824 sys 0m0.171s
13:48:13 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
13:48:13 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:16:41.824 ************************************
00:16:41.824 END TEST nvme_overhead
00:16:41.824 ************************************
13:48:13 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0
13:48:13 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
13:48:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:48:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:41.824 ************************************
00:16:41.824 START TEST nvme_arbitration
00:16:41.824 ************************************
13:48:13 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0
00:16:45.103 Initializing NVMe Controllers
00:16:45.103 Attached to 0000:d8:00.0
00:16:45.103 Associating INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) with lcore 0
00:16:45.103 Associating INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) with lcore 1
00:16:45.103 Associating INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) with lcore 2
00:16:45.103 Associating INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) with lcore 3
00:16:45.103 /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:16:45.103 /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:16:45.103 Initialization complete. Launching workers.
00:16:45.103 Starting thread on core 1 with urgent priority queue 00:16:45.103 Starting thread on core 2 with urgent priority queue 00:16:45.103 Starting thread on core 3 with urgent priority queue 00:16:45.103 Starting thread on core 0 with urgent priority queue 00:16:45.103 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) core 0: 10218.67 IO/s 9.79 secs/100000 ios 00:16:45.104 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) core 1: 12859.67 IO/s 7.78 secs/100000 ios 00:16:45.104 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) core 2: 6677.33 IO/s 14.98 secs/100000 ios 00:16:45.104 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) core 3: 6634.67 IO/s 15.07 secs/100000 ios 00:16:45.104 ======================================================== 00:16:45.104 00:16:45.104 00:16:45.104 real 0m3.362s 00:16:45.104 user 0m9.127s 00:16:45.104 sys 0m0.180s 00:16:45.104 13:48:16 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.104 13:48:16 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:16:45.104 ************************************ 00:16:45.104 END TEST nvme_arbitration 00:16:45.104 ************************************ 00:16:45.104 13:48:16 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 00:16:45.104 13:48:16 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:45.104 13:48:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.104 13:48:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.361 ************************************ 00:16:45.361 START TEST nvme_single_aen 00:16:45.361 ************************************ 00:16:45.361 13:48:16 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 00:16:45.618 [2024-12-05 13:48:16.989644] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3884860) is not found. Dropping the request. 00:16:45.618 [2024-12-05 13:48:16.989703] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3884860) is not found. Dropping the request. 00:16:45.618 [2024-12-05 13:48:16.989721] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3884860) is not found. Dropping the request. 00:16:45.618 [2024-12-05 13:48:16.989736] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3884860) is not found. Dropping the request. 00:16:50.872 Asynchronous Event Request test 00:16:50.872 Attached to 0000:d8:00.0 00:16:50.872 Reset controller to setup AER completions for this process 00:16:50.872 Registering asynchronous event callbacks... 00:16:50.872 Getting orig temperature thresholds of all controllers 00:16:50.872 0000:d8:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:50.872 Setting all controllers temperature threshold low to trigger AER 00:16:50.872 Waiting for all controllers temperature threshold to be set lower 00:16:50.872 Waiting for all controllers to trigger AER and reset threshold 00:16:50.872 0000:d8:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:50.872 aer_cb - Resetting Temp Threshold for device: 0000:d8:00.0 00:16:50.872 0000:d8:00.0: Current Temperature: 308 Kelvin (35 Celsius) 00:16:50.872 Cleaning up... 
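The single_aen run above triggers the asynchronous event by lowering the controller's temperature threshold (NVMe feature ID 0x04, reported here as 343 Kelvin / 70 Celsius) and then restoring it via aer_cb. Purely as an illustration, the same feature can be read with nvme-cli when the disk is bound to the kernel driver; in this run the device is claimed by SPDK, and /dev/nvme0 is an assumed device node.
  sudo nvme get-feature /dev/nvme0 -f 0x04 -H    # Temperature Threshold feature, human-readable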
00:16:50.872 00:16:50.872 real 0m4.797s 00:16:50.872 user 0m3.919s 00:16:50.872 sys 0m0.798s 00:16:50.872 13:48:21 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.872 13:48:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:16:50.872 ************************************ 00:16:50.872 END TEST nvme_single_aen 00:16:50.872 ************************************ 00:16:50.872 13:48:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:16:50.872 13:48:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:50.872 13:48:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.872 13:48:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.872 ************************************ 00:16:50.872 START TEST nvme_doorbell_aers 00:16:50.872 ************************************ 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:50.872 13:48:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:d8:00.0' 00:16:50.872 [2024-12-05 13:48:22.056828] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3888950) is not found. Dropping the request. 00:17:00.839 Executing: test_write_invalid_db 00:17:00.839 Waiting for AER completion... 00:17:00.839 Failure: test_write_invalid_db 00:17:00.839 00:17:00.839 Executing: test_invalid_db_write_overflow_sq 00:17:00.839 Waiting for AER completion... 00:17:00.839 Failure: test_invalid_db_write_overflow_sq 00:17:00.839 00:17:00.839 Executing: test_invalid_db_write_overflow_cq 00:17:00.839 Waiting for AER completion... 
00:17:00.839 Failure: test_invalid_db_write_overflow_cq 00:17:00.839 00:17:00.839 00:17:00.839 real 0m10.127s 00:17:00.839 user 0m7.410s 00:17:00.839 sys 0m2.614s 00:17:00.839 13:48:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.839 13:48:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:17:00.839 ************************************ 00:17:00.839 END TEST nvme_doorbell_aers 00:17:00.839 ************************************ 00:17:00.839 13:48:31 nvme -- nvme/nvme.sh@97 -- # uname 00:17:00.839 13:48:31 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:17:00.840 13:48:31 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 00:17:00.840 13:48:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:17:00.840 13:48:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.840 13:48:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:00.840 ************************************ 00:17:00.840 START TEST nvme_multi_aen 00:17:00.840 ************************************ 00:17:00.840 13:48:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 00:17:00.840 [2024-12-05 13:48:32.080174] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3888950) is not found. Dropping the request. 00:17:00.840 [2024-12-05 13:48:32.080234] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3888950) is not found. Dropping the request. 00:17:00.840 [2024-12-05 13:48:32.080252] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3888950) is not found. Dropping the request. 00:17:00.840 Child process pid: 3890945 00:17:05.018 [Child] Asynchronous Event Request test 00:17:05.018 [Child] Attached to 0000:d8:00.0 00:17:05.018 [Child] Registering asynchronous event callbacks... 00:17:05.018 [Child] Getting orig temperature thresholds of all controllers 00:17:05.018 [Child] 0000:d8:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:05.018 [Child] Waiting for all controllers to trigger AER and reset threshold 00:17:05.018 [Child] 0000:d8:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:05.018 [Child] 0000:d8:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:05.018 [Child] 0000:d8:00.0: Current Temperature: 308 Kelvin (35 Celsius) 00:17:05.018 [Child] Cleaning up... 00:17:05.018 [Child] 0000:d8:00.0: Current Temperature: 308 Kelvin (35 Celsius) 00:17:05.018 Asynchronous Event Request test 00:17:05.018 Attached to 0000:d8:00.0 00:17:05.018 Reset controller to setup AER completions for this process 00:17:05.018 Registering asynchronous event callbacks... 
00:17:05.018 Getting orig temperature thresholds of all controllers 00:17:05.018 0000:d8:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:05.018 Setting all controllers temperature threshold low to trigger AER 00:17:05.018 Waiting for all controllers temperature threshold to be set lower 00:17:05.018 Waiting for all controllers to trigger AER and reset threshold 00:17:05.018 0000:d8:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:05.018 aer_cb - Resetting Temp Threshold for device: 0000:d8:00.0 00:17:05.018 0000:d8:00.0: Current Temperature: 308 Kelvin (35 Celsius) 00:17:05.018 Cleaning up... 00:17:05.018 00:17:05.018 real 0m4.709s 00:17:05.018 user 0m3.640s 00:17:05.018 sys 0m1.761s 00:17:05.018 13:48:36 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.018 13:48:36 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:17:05.018 ************************************ 00:17:05.018 END TEST nvme_multi_aen 00:17:05.018 ************************************ 00:17:05.018 13:48:36 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000 00:17:05.018 13:48:36 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:05.018 13:48:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.018 13:48:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.275 ************************************ 00:17:05.275 START TEST nvme_startup 00:17:05.275 ************************************ 00:17:05.275 13:48:36 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000 00:17:05.533 Initializing NVMe Controllers 00:17:05.533 Attached to 0000:d8:00.0 00:17:05.533 Initialization complete. 00:17:05.533 Time used:277845.500 (us). 
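For scale, the startup test above was launched with -t 1000000; assuming that budget is in microseconds (the same unit as the reported "Time used", which is an assumption about the flag rather than something this log states), the measured 277845.5 us is about 28% of it.
  awk 'BEGIN { printf "%.1f%% of budget used\n", 277845.5 / 1000000 * 100 }'    # -> 27.8% of budget used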
00:17:05.533 00:17:05.533 real 0m0.326s 00:17:05.533 user 0m0.098s 00:17:05.533 sys 0m0.178s 00:17:05.533 13:48:36 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.533 13:48:36 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:17:05.533 ************************************ 00:17:05.533 END TEST nvme_startup 00:17:05.533 ************************************ 00:17:05.533 13:48:36 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:17:05.533 13:48:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.533 13:48:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.533 13:48:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.533 ************************************ 00:17:05.533 START TEST nvme_multi_secondary 00:17:05.533 ************************************ 00:17:05.533 13:48:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:17:05.533 13:48:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=3891687 00:17:05.533 13:48:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:17:05.533 13:48:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=3891688 00:17:05.533 13:48:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:17:05.533 13:48:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:17:09.714 Initializing NVMe Controllers 00:17:09.714 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:17:09.714 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 1 00:17:09.714 Initialization complete. Launching workers. 00:17:09.714 ======================================================== 00:17:09.714 Latency(us) 00:17:09.714 Device Information : IOPS MiB/s Average min max 00:17:09.714 PCIE (0000:d8:00.0) NSID 1 from core 1: 73697.00 287.88 216.89 34.94 5694.82 00:17:09.714 ======================================================== 00:17:09.714 Total : 73697.00 287.88 216.89 34.94 5694.82 00:17:09.714 00:17:09.714 Initializing NVMe Controllers 00:17:09.714 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:17:09.714 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 2 00:17:09.714 Initialization complete. Launching workers. 00:17:09.714 ======================================================== 00:17:09.714 Latency(us) 00:17:09.714 Device Information : IOPS MiB/s Average min max 00:17:09.714 PCIE (0000:d8:00.0) NSID 1 from core 2: 39786.60 155.42 401.75 27.11 6900.71 00:17:09.714 ======================================================== 00:17:09.714 Total : 39786.60 155.42 401.75 27.11 6900.71 00:17:09.714 00:17:09.714 13:48:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 3891687 00:17:11.087 Initializing NVMe Controllers 00:17:11.087 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:17:11.087 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:17:11.087 Initialization complete. Launching workers. 
00:17:11.087 ======================================================== 00:17:11.087 Latency(us) 00:17:11.087 Device Information : IOPS MiB/s Average min max 00:17:11.087 PCIE (0000:d8:00.0) NSID 1 from core 0: 77303.94 301.97 206.66 51.00 3394.86 00:17:11.087 ======================================================== 00:17:11.088 Total : 77303.94 301.97 206.66 51.00 3394.86 00:17:11.088 00:17:11.088 13:48:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 3891688 00:17:11.088 13:48:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=3892395 00:17:11.088 13:48:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:17:11.088 13:48:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=3892396 00:17:11.088 13:48:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:17:11.088 13:48:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:17:14.366 Initializing NVMe Controllers 00:17:14.366 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:17:14.366 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 1 00:17:14.366 Initialization complete. Launching workers. 00:17:14.366 ======================================================== 00:17:14.366 Latency(us) 00:17:14.366 Device Information : IOPS MiB/s Average min max 00:17:14.366 PCIE (0000:d8:00.0) NSID 1 from core 1: 83143.67 324.78 192.23 23.09 1864.05 00:17:14.366 ======================================================== 00:17:14.366 Total : 83143.67 324.78 192.23 23.09 1864.05 00:17:14.366 00:17:14.623 Initializing NVMe Controllers 00:17:14.623 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:17:14.623 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:17:14.623 Initialization complete. Launching workers. 00:17:14.623 ======================================================== 00:17:14.623 Latency(us) 00:17:14.623 Device Information : IOPS MiB/s Average min max 00:17:14.623 PCIE (0000:d8:00.0) NSID 1 from core 0: 77435.85 302.48 206.31 28.34 1847.03 00:17:14.623 ======================================================== 00:17:14.623 Total : 77435.85 302.48 206.31 28.34 1847.03 00:17:14.623 00:17:16.521 Initializing NVMe Controllers 00:17:16.521 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:17:16.521 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 2 00:17:16.521 Initialization complete. Launching workers. 
00:17:16.521 ======================================================== 00:17:16.521 Latency(us) 00:17:16.521 Device Information : IOPS MiB/s Average min max 00:17:16.521 PCIE (0000:d8:00.0) NSID 1 from core 2: 43859.99 171.33 364.33 24.42 5551.10 00:17:16.521 ======================================================== 00:17:16.521 Total : 43859.99 171.33 364.33 24.42 5551.10 00:17:16.521 00:17:16.521 13:48:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 3892395 00:17:16.521 13:48:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 3892396 00:17:16.521 00:17:16.521 real 0m10.721s 00:17:16.521 user 0m18.482s 00:17:16.521 sys 0m1.053s 00:17:16.521 13:48:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.521 13:48:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:17:16.521 ************************************ 00:17:16.521 END TEST nvme_multi_secondary 00:17:16.521 ************************************ 00:17:16.521 13:48:47 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:17:16.521 13:48:47 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:17:16.521 13:48:47 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/3884146 ]] 00:17:16.521 13:48:47 nvme -- common/autotest_common.sh@1094 -- # kill 3884146 00:17:16.521 13:48:47 nvme -- common/autotest_common.sh@1095 -- # wait 3884146 00:17:16.521 [2024-12-05 13:48:47.739726] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3890944) is not found. Dropping the request. 00:17:16.521 [2024-12-05 13:48:47.739814] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3890944) is not found. Dropping the request. 00:17:16.521 [2024-12-05 13:48:47.739853] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3890944) is not found. Dropping the request. 00:17:16.521 [2024-12-05 13:48:47.739901] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 3890944) is not found. Dropping the request. 00:17:17.087 [2024-12-05 13:48:48.600567] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:17:20.374 13:48:51 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:17:20.374 13:48:51 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:17:20.374 13:48:51 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:17:20.374 13:48:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:20.374 13:48:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.374 13:48:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:20.374 ************************************ 00:17:20.374 START TEST bdev_nvme_reset_stuck_adm_cmd 00:17:20.374 ************************************ 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:17:20.374 * Looking for test storage... 
00:17:20.374 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:20.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.374 --rc genhtml_branch_coverage=1 00:17:20.374 --rc genhtml_function_coverage=1 00:17:20.374 --rc genhtml_legend=1 00:17:20.374 --rc geninfo_all_blocks=1 00:17:20.374 --rc geninfo_unexecuted_blocks=1 00:17:20.374 00:17:20.374 ' 00:17:20.374 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:20.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.374 --rc genhtml_branch_coverage=1 00:17:20.374 --rc genhtml_function_coverage=1 00:17:20.374 --rc genhtml_legend=1 00:17:20.374 --rc geninfo_all_blocks=1 00:17:20.374 --rc geninfo_unexecuted_blocks=1 00:17:20.374 00:17:20.374 ' 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.375 --rc genhtml_branch_coverage=1 00:17:20.375 --rc genhtml_function_coverage=1 00:17:20.375 --rc genhtml_legend=1 00:17:20.375 --rc geninfo_all_blocks=1 00:17:20.375 --rc geninfo_unexecuted_blocks=1 00:17:20.375 00:17:20.375 ' 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.375 --rc genhtml_branch_coverage=1 00:17:20.375 --rc genhtml_function_coverage=1 00:17:20.375 --rc genhtml_legend=1 00:17:20.375 --rc geninfo_all_blocks=1 00:17:20.375 --rc geninfo_unexecuted_blocks=1 00:17:20.375 00:17:20.375 ' 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:17:20.375 
13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:d8:00.0 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:d8:00.0 ']' 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=3893713 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0xF 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 3893713 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 3893713 ']' 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
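For reference, a minimal out-of-band sketch of what the get_first_nvme_bdf trace above amounts to. The workspace path is taken from this job's log; the one-liner is a paraphrase of the traced helper, not a line captured from the test scripts themselves:

  #!/usr/bin/env bash
  # Resolve the first NVMe PCI address the same way the trace above does:
  # gen_nvme.sh emits a JSON bdev config and jq pulls every traddr out of it.
  rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  echo "${bdfs[0]}"   # on this node this prints 0000:d8:00.0
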
00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.375 13:48:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:20.634 [2024-12-05 13:48:51.937013] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:17:20.634 [2024-12-05 13:48:51.937092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893713 ] 00:17:20.634 [2024-12-05 13:48:52.083276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.634 [2024-12-05 13:48:52.147147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.634 [2024-12-05 13:48:52.147233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.634 [2024-12-05 13:48:52.147335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.634 [2024-12-05 13:48:52.147340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.892 [2024-12-05 13:48:52.371353] 'OCF_Core' volume operations registered 00:17:20.893 [2024-12-05 13:48:52.371393] 'OCF_Cache' volume operations registered 00:17:20.893 [2024-12-05 13:48:52.375843] 'OCF Composite' volume operations registered 00:17:20.893 [2024-12-05 13:48:52.380301] 'SPDK_block_device' volume operations registered 00:17:21.459 13:48:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.459 13:48:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:17:21.459 13:48:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:d8:00.0 00:17:21.459 13:48:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.459 13:48:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:24.736 nvme0n1 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_m08oW.txt 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:24.736 true 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733402935 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=3894248 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess 
"$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:17:24.736 13:48:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:26.748 [2024-12-05 13:48:57.830619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:d8:00.0, 0] resetting controller 00:17:26.748 [2024-12-05 13:48:57.830880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:26.748 [2024-12-05 13:48:57.830902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.748 [2024-12-05 13:48:57.830919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.748 [2024-12-05 13:48:57.832111] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:d8:00.0, 0] Resetting controller successful. 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 3894248 00:17:26.748 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 3894248 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 3894248 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.748 13:48:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_m08oW.txt 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' 
"$1") | hexdump -ve '/1 "0x%02x\n"')) 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_m08oW.txt 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 3893713 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 3893713 ']' 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 3893713 00:17:30.085 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:17:30.343 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.343 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3893713 00:17:30.343 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.343 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.343 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3893713' 00:17:30.343 killing process with pid 3893713 00:17:30.343 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 3893713 00:17:30.343 13:49:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 3893713 00:17:30.910 13:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || 
err_injection_sct != nvme_status_sct )) 00:17:30.910 13:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:17:30.910 00:17:30.910 real 0m10.597s 00:17:30.910 user 0m39.346s 00:17:30.910 sys 0m1.235s 00:17:30.910 13:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.910 13:49:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:30.910 ************************************ 00:17:30.910 END TEST bdev_nvme_reset_stuck_adm_cmd 00:17:30.910 ************************************ 00:17:30.910 13:49:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:17:30.910 13:49:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:17:30.910 13:49:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:30.910 13:49:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.910 13:49:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:30.910 ************************************ 00:17:30.910 START TEST nvme_fio 00:17:30.910 ************************************ 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:30.910 13:49:02 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:d8:00.0') 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' 00:17:30.910 13:49:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:17:37.470 13:49:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' 00:17:37.470 13:49:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:17:44.028 13:49:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:17:44.028 13:49:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.d8.00.0' --bs=4096 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.d8.00.0' --bs=4096 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:44.028 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:44.283 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:44.283 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:44.283 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:44.283 13:49:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.d8.00.0' --bs=4096 00:17:44.541 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:44.541 fio-3.35 00:17:44.541 Starting 1 thread 00:17:54.506 00:17:54.506 test: (groupid=0, jobs=1): err= 0: pid=3897376: Thu Dec 5 13:49:24 2024 00:17:54.506 read: IOPS=74.5k, BW=291MiB/s (305MB/s)(582MiB/2001msec) 00:17:54.506 slat (nsec): min=3104, max=58267, avg=3540.68, stdev=684.36 00:17:54.506 clat (usec): min=142, max=1895, avg=844.38, stdev=39.21 00:17:54.506 lat (usec): min=146, max=1899, avg=847.92, stdev=39.32 00:17:54.506 clat percentiles (usec): 00:17:54.506 | 1.00th=[ 783], 5.00th=[ 799], 10.00th=[ 807], 20.00th=[ 824], 00:17:54.506 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[ 848], 60.00th=[ 848], 00:17:54.506 | 70.00th=[ 857], 80.00th=[ 865], 90.00th=[ 873], 95.00th=[ 881], 00:17:54.506 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 1319], 99.95th=[ 1582], 00:17:54.506 | 99.99th=[ 1860] 00:17:54.506 bw ( KiB/s): min=291400, max=310040, per=100.00%, avg=299445.33, stdev=9577.93, samples=3 00:17:54.506 iops : min=72850, max=77510, avg=74861.33, stdev=2394.48, samples=3 00:17:54.506 write: IOPS=74.6k, BW=291MiB/s (305MB/s)(583MiB/2001msec); 0 zone resets 00:17:54.506 slat (nsec): min=3135, max=97485, 
avg=3915.86, stdev=778.40 00:17:54.507 clat (usec): min=135, max=1893, avg=844.92, stdev=39.55 00:17:54.507 lat (usec): min=139, max=1898, avg=848.84, stdev=39.70 00:17:54.507 clat percentiles (usec): 00:17:54.507 | 1.00th=[ 783], 5.00th=[ 799], 10.00th=[ 807], 20.00th=[ 824], 00:17:54.507 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[ 848], 60.00th=[ 848], 00:17:54.507 | 70.00th=[ 865], 80.00th=[ 873], 90.00th=[ 873], 95.00th=[ 881], 00:17:54.507 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 1319], 99.95th=[ 1631], 00:17:54.507 | 99.99th=[ 1827] 00:17:54.507 bw ( KiB/s): min=291712, max=306416, per=100.00%, avg=298693.33, stdev=7379.98, samples=3 00:17:54.507 iops : min=72928, max=76604, avg=74673.33, stdev=1844.99, samples=3 00:17:54.507 lat (usec) : 250=0.01%, 500=0.01%, 750=0.06%, 1000=99.67% 00:17:54.507 lat (msec) : 2=0.25% 00:17:54.507 cpu : usr=99.50%, sys=0.05%, ctx=18, majf=0, minf=6 00:17:54.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:54.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.507 issued rwts: total=149058,149241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.507 00:17:54.507 Run status group 0 (all jobs): 00:17:54.507 READ: bw=291MiB/s (305MB/s), 291MiB/s-291MiB/s (305MB/s-305MB/s), io=582MiB (611MB), run=2001-2001msec 00:17:54.507 WRITE: bw=291MiB/s (305MB/s), 291MiB/s-291MiB/s (305MB/s-305MB/s), io=583MiB (611MB), run=2001-2001msec 00:17:54.507 13:49:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:17:54.507 13:49:24 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:17:54.507 00:17:54.507 real 0m22.362s 00:17:54.507 user 0m20.255s 00:17:54.507 sys 0m2.943s 00:17:54.507 13:49:24 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.507 13:49:24 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:17:54.507 ************************************ 00:17:54.507 END TEST nvme_fio 00:17:54.507 ************************************ 00:17:54.507 00:17:54.507 real 1m47.211s 00:17:54.507 user 4m2.592s 00:17:54.507 sys 0m19.463s 00:17:54.507 13:49:24 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.507 13:49:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.507 ************************************ 00:17:54.507 END TEST nvme 00:17:54.507 ************************************ 00:17:54.507 13:49:24 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:17:54.507 13:49:24 -- spdk/autotest.sh@217 -- # run_test nvme_scc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh 00:17:54.507 13:49:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:54.507 13:49:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.507 13:49:24 -- common/autotest_common.sh@10 -- # set +x 00:17:54.507 ************************************ 00:17:54.507 START TEST nvme_scc 00:17:54.507 ************************************ 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh 00:17:54.507 * Looking for test storage... 
00:17:54.507 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@345 -- # : 1 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@368 -- # return 0 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.507 --rc genhtml_branch_coverage=1 00:17:54.507 --rc genhtml_function_coverage=1 00:17:54.507 --rc genhtml_legend=1 00:17:54.507 --rc geninfo_all_blocks=1 00:17:54.507 --rc geninfo_unexecuted_blocks=1 00:17:54.507 00:17:54.507 ' 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.507 --rc genhtml_branch_coverage=1 00:17:54.507 --rc genhtml_function_coverage=1 00:17:54.507 --rc genhtml_legend=1 00:17:54.507 --rc geninfo_all_blocks=1 00:17:54.507 --rc geninfo_unexecuted_blocks=1 00:17:54.507 00:17:54.507 ' 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:17:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.507 --rc genhtml_branch_coverage=1 00:17:54.507 --rc genhtml_function_coverage=1 00:17:54.507 --rc genhtml_legend=1 00:17:54.507 --rc geninfo_all_blocks=1 00:17:54.507 --rc geninfo_unexecuted_blocks=1 00:17:54.507 00:17:54.507 ' 00:17:54.507 13:49:24 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.507 --rc genhtml_branch_coverage=1 00:17:54.507 --rc genhtml_function_coverage=1 00:17:54.507 --rc genhtml_legend=1 00:17:54.507 --rc geninfo_all_blocks=1 00:17:54.507 --rc geninfo_unexecuted_blocks=1 00:17:54.507 00:17:54.507 ' 00:17:54.507 13:49:24 nvme_scc -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.507 13:49:24 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.507 13:49:24 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.507 13:49:24 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.507 13:49:24 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.507 13:49:24 nvme_scc -- paths/export.sh@5 -- # export PATH 00:17:54.507 13:49:24 nvme_scc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:17:54.507 13:49:24 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:17:54.508 13:49:24 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:17:54.508 13:49:24 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:17:54.508 13:49:24 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:17:54.508 13:49:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:17:54.508 13:49:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:17:54.508 13:49:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ ............................... == QEMU ]] 00:17:54.508 13:49:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:17:54.508 00:17:54.508 real 0m0.221s 00:17:54.508 user 0m0.123s 00:17:54.508 sys 0m0.112s 00:17:54.508 13:49:24 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.508 13:49:24 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:17:54.508 ************************************ 00:17:54.508 END TEST nvme_scc 00:17:54.508 ************************************ 00:17:54.508 13:49:24 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:17:54.508 13:49:24 -- spdk/autotest.sh@222 -- # [[ 1 -eq 1 ]] 00:17:54.508 13:49:24 -- spdk/autotest.sh@223 -- # run_test nvme_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh 00:17:54.508 13:49:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:54.508 13:49:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.508 13:49:24 -- common/autotest_common.sh@10 -- # set +x 00:17:54.508 ************************************ 00:17:54.508 START TEST nvme_cuse 00:17:54.508 ************************************ 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh 00:17:54.508 * Looking for test storage... 
00:17:54.508 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@344 -- # case "$op" in 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@345 -- # : 1 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@365 -- # decimal 1 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@353 -- # local d=1 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@355 -- # echo 1 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@366 -- # decimal 2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@353 -- # local d=2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@355 -- # echo 2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.508 13:49:25 nvme_cuse -- scripts/common.sh@368 -- # return 0 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:54.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.508 --rc genhtml_branch_coverage=1 00:17:54.508 --rc genhtml_function_coverage=1 00:17:54.508 --rc genhtml_legend=1 00:17:54.508 --rc geninfo_all_blocks=1 00:17:54.508 --rc geninfo_unexecuted_blocks=1 00:17:54.508 00:17:54.508 ' 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:54.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.508 --rc genhtml_branch_coverage=1 00:17:54.508 --rc genhtml_function_coverage=1 00:17:54.508 --rc genhtml_legend=1 00:17:54.508 --rc geninfo_all_blocks=1 00:17:54.508 --rc geninfo_unexecuted_blocks=1 00:17:54.508 00:17:54.508 ' 00:17:54.508 13:49:25 nvme_cuse -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:54.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.508 --rc genhtml_branch_coverage=1 00:17:54.508 --rc genhtml_function_coverage=1 00:17:54.508 --rc genhtml_legend=1 00:17:54.508 --rc geninfo_all_blocks=1 00:17:54.508 --rc geninfo_unexecuted_blocks=1 00:17:54.508 00:17:54.508 ' 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:54.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.508 --rc genhtml_branch_coverage=1 00:17:54.508 --rc genhtml_function_coverage=1 00:17:54.508 --rc genhtml_legend=1 00:17:54.508 --rc geninfo_all_blocks=1 00:17:54.508 --rc geninfo_unexecuted_blocks=1 00:17:54.508 00:17:54.508 ' 00:17:54.508 13:49:25 nvme_cuse -- cuse/nvme_cuse.sh@11 -- # uname 00:17:54.508 13:49:25 nvme_cuse -- cuse/nvme_cuse.sh@11 -- # [[ Linux != \L\i\n\u\x ]] 00:17:54.508 13:49:25 nvme_cuse -- cuse/nvme_cuse.sh@16 -- # modprobe cuse 00:17:54.508 13:49:25 nvme_cuse -- cuse/nvme_cuse.sh@17 -- # run_test nvme_cuse_app /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.508 13:49:25 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:17:54.508 ************************************ 00:17:54.508 START TEST nvme_cuse_app 00:17:54.508 ************************************ 00:17:54.508 13:49:25 nvme_cuse.nvme_cuse_app -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse 00:17:54.508 00:17:54.508 00:17:54.508 CUnit - A unit testing framework for C - Version 2.1-3 00:17:54.508 http://cunit.sourceforge.net/ 00:17:54.508 00:17:54.508 00:17:54.508 Suite: nvme_cuse 00:17:55.075 Test: test_cuse_update ...passed 00:17:55.075 00:17:55.075 Run Summary: Type Total Ran Passed Failed Inactive 00:17:55.075 suites 1 1 n/a 0 0 00:17:55.075 tests 1 1 1 0 0 00:17:55.075 asserts 28 28 28 0 n/a 00:17:55.075 00:17:55.075 Elapsed time = 0.073 seconds 00:17:55.075 00:17:55.075 real 0m1.029s 00:17:55.075 user 0m0.018s 00:17:55.075 sys 0m0.070s 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_app -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_app -- common/autotest_common.sh@10 -- # set +x 00:17:55.075 ************************************ 00:17:55.075 END TEST nvme_cuse_app 00:17:55.075 ************************************ 00:17:55.075 13:49:26 nvme_cuse -- cuse/nvme_cuse.sh@18 -- # run_test nvme_cuse_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh 00:17:55.075 13:49:26 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:55.075 13:49:26 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.075 13:49:26 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:17:55.075 ************************************ 00:17:55.075 START TEST nvme_cuse_rpc 00:17:55.075 ************************************ 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh 00:17:55.075 * Looking for test storage... 
00:17:55.075 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@345 -- # : 1 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@353 -- # local d=1 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@355 -- # echo 1 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@353 -- # local d=2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@355 -- # echo 2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@368 -- # return 0 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:55.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.075 --rc genhtml_branch_coverage=1 00:17:55.075 --rc genhtml_function_coverage=1 00:17:55.075 --rc genhtml_legend=1 00:17:55.075 --rc geninfo_all_blocks=1 00:17:55.075 --rc geninfo_unexecuted_blocks=1 00:17:55.075 00:17:55.075 ' 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:55.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.075 --rc genhtml_branch_coverage=1 00:17:55.075 --rc genhtml_function_coverage=1 00:17:55.075 --rc genhtml_legend=1 00:17:55.075 --rc geninfo_all_blocks=1 00:17:55.075 --rc geninfo_unexecuted_blocks=1 00:17:55.075 00:17:55.075 ' 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:55.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.075 --rc genhtml_branch_coverage=1 00:17:55.075 --rc genhtml_function_coverage=1 00:17:55.075 --rc genhtml_legend=1 00:17:55.075 --rc geninfo_all_blocks=1 00:17:55.075 --rc geninfo_unexecuted_blocks=1 00:17:55.075 00:17:55.075 ' 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:55.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.075 --rc genhtml_branch_coverage=1 00:17:55.075 --rc genhtml_function_coverage=1 00:17:55.075 --rc genhtml_legend=1 00:17:55.075 --rc geninfo_all_blocks=1 00:17:55.075 --rc geninfo_unexecuted_blocks=1 00:17:55.075 00:17:55.075 ' 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@13 -- # get_first_nvme_bdf 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:17:55.075 13:49:26 
nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:55.075 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@13 -- # bdf=0000:d8:00.0 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@14 -- # ctrlr_base=/dev/spdk/nvme 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@17 -- # spdk_tgt_pid=3898651 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@18 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@20 -- # waitforlisten 3898651 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@835 -- # '[' -z 3898651 ']' 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.333 13:49:26 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:17:55.333 [2024-12-05 13:49:26.730175] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:17:55.333 [2024-12-05 13:49:26.730248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898651 ] 00:17:55.333 [2024-12-05 13:49:26.849211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:55.591 [2024-12-05 13:49:26.905274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.591 [2024-12-05 13:49:26.905281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.849 [2024-12-05 13:49:27.115001] 'OCF_Core' volume operations registered 00:17:55.849 [2024-12-05 13:49:27.115032] 'OCF_Cache' volume operations registered 00:17:55.849 [2024-12-05 13:49:27.119151] 'OCF Composite' volume operations registered 00:17:55.849 [2024-12-05 13:49:27.123299] 'SPDK_block_device' volume operations registered 00:17:55.849 13:49:27 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.849 13:49:27 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:55.849 13:49:27 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:17:59.132 Nvme0n1 00:17:59.132 13:49:30 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:17:59.132 [2024-12-05 13:49:30.561754] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:17:59.132 [2024-12-05 13:49:30.561802] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:17:59.132 [2024-12-05 13:49:30.561931] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:17:59.132 [2024-12-05 13:49:30.561973] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:17:59.132 13:49:30 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@25 -- # sleep 5 00:18:04.403 13:49:35 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@27 -- # '[' '!' 
-c /dev/spdk/nvme0 ']' 00:18:04.403 13:49:35 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:18:04.403 [ 00:18:04.403 { 00:18:04.403 "name": "Nvme0n1", 00:18:04.403 "aliases": [ 00:18:04.403 "8cb203f6-61bd-4ce1-a81e-c75ab1e450b4" 00:18:04.403 ], 00:18:04.403 "product_name": "NVMe disk", 00:18:04.403 "block_size": 512, 00:18:04.403 "num_blocks": 7814037168, 00:18:04.403 "uuid": "8cb203f6-61bd-4ce1-a81e-c75ab1e450b4", 00:18:04.403 "numa_id": 1, 00:18:04.403 "assigned_rate_limits": { 00:18:04.403 "rw_ios_per_sec": 0, 00:18:04.403 "rw_mbytes_per_sec": 0, 00:18:04.403 "r_mbytes_per_sec": 0, 00:18:04.403 "w_mbytes_per_sec": 0 00:18:04.403 }, 00:18:04.403 "claimed": false, 00:18:04.403 "zoned": false, 00:18:04.403 "supported_io_types": { 00:18:04.403 "read": true, 00:18:04.403 "write": true, 00:18:04.403 "unmap": true, 00:18:04.403 "flush": true, 00:18:04.403 "reset": true, 00:18:04.403 "nvme_admin": true, 00:18:04.403 "nvme_io": true, 00:18:04.403 "nvme_io_md": false, 00:18:04.403 "write_zeroes": true, 00:18:04.403 "zcopy": false, 00:18:04.403 "get_zone_info": false, 00:18:04.403 "zone_management": false, 00:18:04.403 "zone_append": false, 00:18:04.403 "compare": false, 00:18:04.403 "compare_and_write": false, 00:18:04.403 "abort": true, 00:18:04.403 "seek_hole": false, 00:18:04.403 "seek_data": false, 00:18:04.403 "copy": false, 00:18:04.403 "nvme_iov_md": false 00:18:04.403 }, 00:18:04.403 "driver_specific": { 00:18:04.403 "nvme": [ 00:18:04.403 { 00:18:04.403 "pci_address": "0000:d8:00.0", 00:18:04.403 "trid": { 00:18:04.403 "trtype": "PCIe", 00:18:04.403 "traddr": "0000:d8:00.0" 00:18:04.403 }, 00:18:04.403 "cuse_device": "spdk/nvme0n1", 00:18:04.403 "ctrlr_data": { 00:18:04.403 "cntlid": 0, 00:18:04.403 "vendor_id": "0x8086", 00:18:04.403 "model_number": "INTEL SSDPE2KX040T8", 00:18:04.403 "serial_number": "BTLJ8234018V4P0DGN", 00:18:04.403 "firmware_revision": "VDV1Y295", 00:18:04.403 "oacs": { 00:18:04.403 "security": 0, 00:18:04.403 "format": 1, 00:18:04.403 "firmware": 1, 00:18:04.403 "ns_manage": 1 00:18:04.403 }, 00:18:04.403 "multi_ctrlr": false, 00:18:04.403 "ana_reporting": false 00:18:04.403 }, 00:18:04.403 "vs": { 00:18:04.403 "nvme_version": "1.2" 00:18:04.403 }, 00:18:04.403 "ns_data": { 00:18:04.403 "id": 1, 00:18:04.403 "can_share": false 00:18:04.403 } 00:18:04.403 } 00:18:04.403 ], 00:18:04.403 "mp_policy": "active_passive" 00:18:04.403 } 00:18:04.403 } 00:18:04.403 ] 00:18:04.404 13:49:35 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:18:04.660 [ 00:18:04.660 { 00:18:04.660 "name": "Nvme0", 00:18:04.660 "ctrlrs": [ 00:18:04.660 { 00:18:04.660 "state": "enabled", 00:18:04.660 "cuse_device": "spdk/nvme0", 00:18:04.660 "trid": { 00:18:04.660 "trtype": "PCIe", 00:18:04.660 "traddr": "0000:d8:00.0" 00:18:04.660 }, 00:18:04.660 "cntlid": 0, 00:18:04.660 "host": { 00:18:04.660 "nqn": "nqn.2014-08.org.nvmexpress:uuid:4c291b8d-6145-45df-b71e-74726740b2e6", 00:18:04.660 "addr": "", 00:18:04.660 "svcid": "" 00:18:04.660 }, 00:18:04.660 "numa_id": 1 00:18:04.660 } 00:18:04.660 ] 00:18:04.660 } 00:18:04.660 ] 00:18:04.660 13:49:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0 00:18:04.917 13:49:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@35 -- # sleep 1 00:18:05.850 [2024-12-05 
13:49:37.070934] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:18:06.108 13:49:37 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@36 -- # '[' -c /dev/spdk/nvme0 ']' 00:18:06.108 13:49:37 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0 00:18:06.367 [2024-12-05 13:49:37.653070] nvme_cuse.c:1471:spdk_nvme_cuse_unregister: *ERROR*: Cannot find associated CUSE device 00:18:06.367 request: 00:18:06.367 { 00:18:06.367 "name": "Nvme0", 00:18:06.367 "method": "bdev_nvme_cuse_unregister", 00:18:06.367 "req_id": 1 00:18:06.367 } 00:18:06.367 Got JSON-RPC error response 00:18:06.367 response: 00:18:06.367 { 00:18:06.367 "code": -19, 00:18:06.367 "message": "No such device" 00:18:06.367 } 00:18:06.367 13:49:37 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:18:06.626 [2024-12-05 13:49:37.924197] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:18:06.626 [2024-12-05 13:49:37.924233] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:18:06.626 [2024-12-05 13:49:37.924358] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:18:06.626 [2024-12-05 13:49:37.924398] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:18:06.626 13:49:37 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@44 -- # sleep 1 00:18:07.559 13:49:38 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@46 -- # '[' '!' -c /dev/spdk/nvme0 ']' 00:18:07.559 13:49:38 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:18:07.817 [2024-12-05 13:49:39.118248] bdev_nvme_cuse_rpc.c: 57:rpc_nvme_cuse_register: *ERROR*: Failed to register CUSE devices: File exists 00:18:07.817 request: 00:18:07.817 { 00:18:07.817 "name": "Nvme0", 00:18:07.817 "method": "bdev_nvme_cuse_register", 00:18:07.817 "req_id": 1 00:18:07.817 } 00:18:07.817 Got JSON-RPC error response 00:18:07.817 response: 00:18:07.817 { 00:18:07.817 "code": -17, 00:18:07.817 "message": "File exists" 00:18:07.817 } 00:18:07.817 13:49:39 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@52 -- # sleep 1 00:18:08.749 13:49:40 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@54 -- # '[' -c /dev/spdk/nvme1 ']' 00:18:08.749 13:49:40 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:09.679 [2024-12-05 13:49:40.932222] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 
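For reference, the nvme_cuse_rpc flow traced above reduces to roughly the following RPC sequence. This is a sketch, not captured output; it assumes a running spdk_tgt, and the rpc.py path, controller name (Nvme0) and PCI address (0000:d8:00.0) are simply taken from the log:

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0   # exposes bdev Nvme0n1
    $rpc bdev_nvme_cuse_register -n Nvme0        # creates /dev/spdk/nvme0 and /dev/spdk/nvme0n1
    $rpc bdev_get_bdevs                          # bdev now reports cuse_device "spdk/nvme0n1"
    $rpc bdev_nvme_cuse_unregister -n Nvme0      # CUSE thread exits, device nodes removed
    $rpc bdev_nvme_cuse_unregister -n Nvme0 || true   # second unregister fails: -19 "No such device"
    $rpc bdev_nvme_cuse_register -n Nvme0
    $rpc bdev_nvme_cuse_register -n Nvme0 || true     # duplicate register fails: -17 "File exists"
    $rpc bdev_nvme_detach_controller Nvme0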
00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@60 -- # trap - SIGINT SIGTERM EXIT 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@61 -- # killprocess 3898651 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@954 -- # '[' -z 3898651 ']' 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@958 -- # kill -0 3898651 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@959 -- # uname 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3898651 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3898651' 00:18:12.963 killing process with pid 3898651 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@973 -- # kill 3898651 00:18:12.963 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@978 -- # wait 3898651 00:18:13.222 00:18:13.222 real 0m18.285s 00:18:13.222 user 0m36.324s 00:18:13.222 sys 0m1.397s 00:18:13.222 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.222 13:49:44 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.222 ************************************ 00:18:13.222 END TEST nvme_cuse_rpc 00:18:13.222 ************************************ 00:18:13.222 13:49:44 nvme_cuse -- cuse/nvme_cuse.sh@19 -- # run_test nvme_cli_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh 00:18:13.222 13:49:44 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.222 13:49:44 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.222 13:49:44 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:13.222 ************************************ 00:18:13.222 START TEST nvme_cli_cuse 00:18:13.222 ************************************ 00:18:13.222 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh 00:18:13.481 * Looking for test storage... 
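Most of the xtrace that follows is nvme/functions.sh scanning the controller for the CLI test: it runs /usr/local/src/nvme-cli/nvme id-ctrl against /dev/nvme0 and folds each "field : value" line into a bash associative array. A rough, self-contained sketch of that pattern, simplified from the real functions.sh and with names chosen only to mirror the trace:

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[!a-zA-Z0-9_]/}        # keep only the field name, e.g. sn, mn, fr
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }              # e.g. nvme0[sn]=BTLJ8234018V4P0DGN
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[mn]} / ${nvme0[sn]} / fw ${nvme0[fr]}"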
00:18:13.481 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@344 -- # case "$op" in 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@345 -- # : 1 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@365 -- # decimal 1 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@353 -- # local d=1 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@355 -- # echo 1 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@366 -- # decimal 2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@353 -- # local d=2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@355 -- # echo 2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@368 -- # return 0 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.481 --rc genhtml_branch_coverage=1 00:18:13.481 --rc genhtml_function_coverage=1 00:18:13.481 --rc genhtml_legend=1 00:18:13.481 --rc geninfo_all_blocks=1 00:18:13.481 --rc geninfo_unexecuted_blocks=1 00:18:13.481 00:18:13.481 ' 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.481 --rc genhtml_branch_coverage=1 00:18:13.481 --rc genhtml_function_coverage=1 00:18:13.481 --rc genhtml_legend=1 00:18:13.481 --rc geninfo_all_blocks=1 00:18:13.481 --rc geninfo_unexecuted_blocks=1 00:18:13.481 00:18:13.481 ' 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.481 --rc genhtml_branch_coverage=1 00:18:13.481 --rc genhtml_function_coverage=1 00:18:13.481 --rc genhtml_legend=1 00:18:13.481 --rc geninfo_all_blocks=1 00:18:13.481 --rc geninfo_unexecuted_blocks=1 00:18:13.481 00:18:13.481 ' 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.481 --rc genhtml_branch_coverage=1 00:18:13.481 --rc genhtml_function_coverage=1 00:18:13.481 --rc genhtml_legend=1 00:18:13.481 --rc geninfo_all_blocks=1 00:18:13.481 --rc geninfo_unexecuted_blocks=1 00:18:13.481 00:18:13.481 ' 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@7 -- # 
rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.481 13:49:44 nvme_cuse.nvme_cli_cuse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- paths/export.sh@5 -- # export PATH 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@10 -- # ctrls=() 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@10 -- # declare -A ctrls 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@11 -- # nvmes=() 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@11 -- # declare -A nvmes 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@12 -- # bdfs=() 
00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@12 -- # declare -A bdfs 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@14 -- # nvme_name= 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@10 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@11 -- # mkdir /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@13 -- # KERNEL_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@14 -- # CUSE_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@16 -- # NVME_CMD=/usr/local/src/nvme-cli/nvme 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@17 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:18:13.482 13:49:44 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@19 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:18:16.764 Waiting for block devices as requested 00:18:16.764 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:16.764 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:17.022 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:17.022 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:17.022 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:17.280 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:17.281 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:17.281 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:17.538 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:17.538 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:17.538 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:17.796 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:17.796 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:17.796 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:18.054 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:18.054 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:18.054 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@20 -- # scan_nvme_ctrls 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@49 -- # pci=0000:d8:00.0 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@50 -- # pci_can_use 0000:d8:00.0 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@18 -- # local i 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@21 -- # [[ =~ 0000:d8:00.0 ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@25 -- # [[ -z '' ]] 
00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@27 -- # return 0 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@18 -- # shift 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[vid]=0x8086 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n BTLJ8234018V4P0DGN ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ8234018V4P0DGN "' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ8234018V4P0DGN ' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n INTEL SSDPE2KX040T8 ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8 "' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8 ' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n VDV1Y295 ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV1Y295"' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fr]=VDV1Y295 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"' 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rab]=0 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.989 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 5cd2e4 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 5 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mdts]=5 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x10200 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ver]=0x10200 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x989680 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe4e1c0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x200 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[oaes]=0x200 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ctratt]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cntrltype]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # 
nvme0[crdt3]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mec]=1 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[oacs]=0xe 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x18 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[frmw]=0x18 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@23 -- # nvme0[lpa]=0xe 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 63 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[elpe]=63 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:18:18.990 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 353 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cctemp]=353 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:18:18.991 
13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.991 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 
13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nn]=128 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x6 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[oncs]=0x6 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.252 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fna]=0x4 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[vwc]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ocfs]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sgls]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[subnqn]= 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- 
# IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n - ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@18 -- # shift 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x1d1c0beb0"' 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x1d1c0beb0 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:19.253 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x1d1c0beb0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x1d1c0beb0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x1d1c0beb0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x1d1c0beb0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="1"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=1 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[flbas]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[mc]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[dpc]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 
13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # 
IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="4,000,787,030,016"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=4,000,787,030,016 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[mssrl]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[mcl]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[msrc]=0 
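[editorial note] The trace above (and continuing below) is nvme/functions.sh's nvme_get helper walking the output of "nvme id-ns" line by line and storing each "field : value" pair in a bash associative array (ng0n1[...], nvme0n1[...]). A minimal, condensed sketch of that parsing pattern is shown here; it uses a plain associative array instead of SPDK's eval/nameref plumbing, and the array name ns_fields is illustrative only:

declare -A ns_fields
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue                  # skip lines with no value, like the [[ -n ... ]] checks above
    reg=${reg//[[:space:]]/}                   # normalize the field name (e.g. "lbaf 0" -> "lbaf0")
    val=${val#"${val%%[![:space:]]*}"}         # trim leading whitespace from the value
    ns_fields[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)
echo "nsze=${ns_fields[nsze]} nlbaf=${ns_fields[nlbaf]} lbaf0=${ns_fields[lbaf0]}"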
00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:18:19.254 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 01000000d91400000000000000000000 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="01000000d91400000000000000000000"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nguid]=01000000d91400000000000000000000 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 000000000000d914 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="000000000000d914"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[eui64]=000000000000d914 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 
rp:0x2 (in use) ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@18 -- # shift 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 
-- # nvme0n1[nuse]=0x1d1c0beb0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[mc]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:18:19.255 
13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:18:19.255 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[mcl]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[msrc]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 01000000d91400000000000000000000 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="01000000d91400000000000000000000"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nguid]=01000000d91400000000000000000000 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 000000000000d914 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="000000000000d914"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[eui64]=000000000000d914 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:18:19.256 13:49:50 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:d8:00.0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@22 -- # get_nvme_with_ns_management 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@155 -- # local _ctrls 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@157 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@157 -- # get_nvmes_with_ns_management 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@144 -- # (( 1 == 0 )) 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@146 -- # local ctrl 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}" 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@122 -- # local -A bits 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@125 -- # bits["ss/sr"]=1 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@126 -- # bits["fnvme"]=2 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@127 -- # bits["fc/fi"]=4 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@128 -- # bits["nsmgt"]=8 00:18:19.256 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@129 -- # bits["self-test"]=16 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@130 -- # bits["directives"]=32 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@132 -- # bits["virtmgt"]=128 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@134 -- # bits["getlba"]=512 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@137 -- # bit=nsmgt 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@138 -- # [[ -n 8 ]] 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@75 -- # [[ -n 0xe ]] 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@76 -- # echo 0xe 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] )) 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@148 -- # echo nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@151 -- # return 0 
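[editorial note] The get_nvmes_with_ns_management walk above decides whether nvme0 may be used for the test by testing the Namespace Management bit (value 8) of the controller's OACS field, which this controller reports as 0xe (consistent with the format/firmware/ns_manage flags in the bdev JSON later in the log). A standalone sketch of that check, using the same bit table as functions.sh; the echo text is illustrative:

declare -A bits=( ["ss/sr"]=1 ["fnvme"]=2 ["fc/fi"]=4 ["nsmgt"]=8
                  ["self-test"]=16 ["directives"]=32 ["nvme-mi-s/r"]=64
                  ["virtmgt"]=128 ["doorbellbuf"]=256 ["getlba"]=512
                  ["commfeatlock"]=1024 )
oacs=0xe                                    # value reported by nvme0 in this run
if (( oacs & bits[nsmgt] )); then           # 0xe & 0x8 != 0 -> namespace management supported
    echo "nvme0 supports namespace management"
fi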
00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@158 -- # (( 1 > 0 )) 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@159 -- # echo nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@160 -- # return 0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@22 -- # nvme_name=nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@27 -- # sel_cmd=() 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@29 -- # get_oncs nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@75 -- # [[ -n 0x6 ]] 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@76 -- # echo 0x6 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@29 -- # (( 0x6 & 1 << 4 )) 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@33 -- # ctrlr=/dev/nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@34 -- # ns=/dev/nvme0n1 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@35 -- # bdf=0000:d8:00.0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@37 -- # waitforblk nvme0n1 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1239 -- # local i=0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1250 -- # return 0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # grep oacs 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # cut -d: -f2 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # oacs=' 0xe' 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@40 -- # oacs_firmware=4 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/nvme0n1 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@43 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@44 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/nvme0n1 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@46 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@47 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/nvme0 00:18:19.257 13:49:50 
nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@48 -- # '[' 4 -ne 0 ']' 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@49 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@51 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/nvme0 00:18:19.257 Smart Log for NVME device:nvme0 namespace-id:ffffffff 00:18:19.257 critical_warning : 0 00:18:19.257 temperature : 36 °C (309 K) 00:18:19.257 available_spare : 100% 00:18:19.257 available_spare_threshold : 10% 00:18:19.257 percentage_used : 6% 00:18:19.257 endurance group critical warning summary: 0 00:18:19.257 Data Units Read : 103,469,196 (52.98 TB) 00:18:19.257 Data Units Written : 227,640,082 (116.55 TB) 00:18:19.257 host_read_commands : 6,942,809,715 00:18:19.257 host_write_commands : 8,109,758,873 00:18:19.257 controller_busy_time : 604 00:18:19.257 power_cycles : 97 00:18:19.257 power_on_hours : 39,056 00:18:19.257 unsafe_shutdowns : 77 00:18:19.257 media_errors : 0 00:18:19.257 num_err_log_entries : 7,941 00:18:19.257 Warning Temperature Time : 474 00:18:19.257 Critical Composite Temperature Time : 0 00:18:19.257 Thermal Management T1 Trans Count : 0 00:18:19.257 Thermal Management T2 Trans Count : 0 00:18:19.257 Thermal Management T1 Total Time : 0 00:18:19.257 Thermal Management T2 Total Time : 0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@52 -- # /usr/local/src/nvme-cli/nvme error-log /dev/nvme0 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@53 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/nvme0 -f 1 -l 100 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@54 -- # /usr/local/src/nvme-cli/nvme get-log /dev/nvme0 -i 1 -l 100 00:18:19.257 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0 00:18:19.515 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@59 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/nvme0 -n 1 -f 2 -v 0 00:18:19.515 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@59 -- # true 00:18:19.515 13:49:50 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:18:22.801 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:18:22.801 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:18:22.801 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:18:23.059 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:18:23.059 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:18:23.060 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:18:26.378 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@64 -- # spdk_tgt_pid=3904093 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@63 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@65 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@67 -- # waitforlisten 3904093 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@835 -- # '[' -z 3904093 ']' 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.315 13:49:58 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:27.315 [2024-12-05 13:49:58.835016] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:18:27.315 [2024-12-05 13:49:58.835087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904093 ] 00:18:27.574 [2024-12-05 13:49:58.959283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:27.574 [2024-12-05 13:49:59.016903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.574 [2024-12-05 13:49:59.016909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.832 [2024-12-05 13:49:59.224201] 'OCF_Core' volume operations registered 00:18:27.832 [2024-12-05 13:49:59.224234] 'OCF_Cache' volume operations registered 00:18:27.832 [2024-12-05 13:49:59.228568] 'OCF Composite' volume operations registered 00:18:27.832 [2024-12-05 13:49:59.232932] 'SPDK_block_device' volume operations registered 00:18:28.091 13:49:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.091 13:49:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@868 -- # return 0 00:18:28.091 13:49:59 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:18:31.370 Nvme0n1 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@70 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:18:31.370 [2024-12-05 13:50:02.785879] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:18:31.370 [2024-12-05 13:50:02.785930] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:18:31.370 [2024-12-05 13:50:02.786066] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:18:31.370 [2024-12-05 13:50:02.786108] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@72 -- # ctrlr=/dev/spdk/nvme0 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@73 -- # ns=/dev/spdk/nvme0n1 00:18:31.370 
13:50:02 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@74 -- # waitforfile /dev/spdk/nvme0n1 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1269 -- # local i=0 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/spdk/nvme0n1 ']' 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1276 -- # '[' '!' -e /dev/spdk/nvme0n1 ']' 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1280 -- # return 0 00:18:31.370 13:50:02 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:18:31.630 [ 00:18:31.630 { 00:18:31.630 "name": "Nvme0n1", 00:18:31.630 "aliases": [ 00:18:31.630 "8c5324b1-bfb8-45c7-9486-601ad09938d0" 00:18:31.630 ], 00:18:31.630 "product_name": "NVMe disk", 00:18:31.630 "block_size": 512, 00:18:31.630 "num_blocks": 7814037168, 00:18:31.630 "uuid": "8c5324b1-bfb8-45c7-9486-601ad09938d0", 00:18:31.630 "numa_id": 1, 00:18:31.630 "assigned_rate_limits": { 00:18:31.630 "rw_ios_per_sec": 0, 00:18:31.630 "rw_mbytes_per_sec": 0, 00:18:31.630 "r_mbytes_per_sec": 0, 00:18:31.630 "w_mbytes_per_sec": 0 00:18:31.630 }, 00:18:31.630 "claimed": false, 00:18:31.630 "zoned": false, 00:18:31.630 "supported_io_types": { 00:18:31.630 "read": true, 00:18:31.630 "write": true, 00:18:31.630 "unmap": true, 00:18:31.630 "flush": true, 00:18:31.630 "reset": true, 00:18:31.630 "nvme_admin": true, 00:18:31.630 "nvme_io": true, 00:18:31.630 "nvme_io_md": false, 00:18:31.630 "write_zeroes": true, 00:18:31.630 "zcopy": false, 00:18:31.630 "get_zone_info": false, 00:18:31.630 "zone_management": false, 00:18:31.630 "zone_append": false, 00:18:31.630 "compare": false, 00:18:31.630 "compare_and_write": false, 00:18:31.630 "abort": true, 00:18:31.630 "seek_hole": false, 00:18:31.630 "seek_data": false, 00:18:31.630 "copy": false, 00:18:31.630 "nvme_iov_md": false 00:18:31.630 }, 00:18:31.630 "driver_specific": { 00:18:31.630 "nvme": [ 00:18:31.630 { 00:18:31.630 "pci_address": "0000:d8:00.0", 00:18:31.630 "trid": { 00:18:31.630 "trtype": "PCIe", 00:18:31.630 "traddr": "0000:d8:00.0" 00:18:31.630 }, 00:18:31.630 "cuse_device": "spdk/nvme0n1", 00:18:31.630 "ctrlr_data": { 00:18:31.630 "cntlid": 0, 00:18:31.630 "vendor_id": "0x8086", 00:18:31.630 "model_number": "INTEL SSDPE2KX040T8", 00:18:31.630 "serial_number": "BTLJ8234018V4P0DGN", 00:18:31.630 "firmware_revision": "VDV1Y295", 00:18:31.630 "oacs": { 00:18:31.630 "security": 0, 00:18:31.630 "format": 1, 00:18:31.630 "firmware": 1, 00:18:31.630 "ns_manage": 1 00:18:31.630 }, 00:18:31.630 "multi_ctrlr": false, 00:18:31.630 "ana_reporting": false 00:18:31.630 }, 00:18:31.630 "vs": { 00:18:31.630 "nvme_version": "1.2" 00:18:31.630 }, 00:18:31.630 "ns_data": { 00:18:31.630 "id": 1, 00:18:31.630 "can_share": false 00:18:31.630 } 00:18:31.630 } 00:18:31.630 ], 00:18:31.630 "mp_policy": "active_passive" 00:18:31.630 } 00:18:31.630 } 00:18:31.630 ] 00:18:31.630 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:18:31.959 [ 00:18:31.959 { 00:18:31.959 "name": "Nvme0", 00:18:31.959 "ctrlrs": [ 00:18:31.959 { 00:18:31.959 "state": "enabled", 00:18:31.959 "cuse_device": "spdk/nvme0", 00:18:31.959 "trid": { 00:18:31.959 "trtype": "PCIe", 00:18:31.959 "traddr": "0000:d8:00.0" 00:18:31.959 }, 00:18:31.959 "cntlid": 0, 00:18:31.959 "host": { 00:18:31.959 "nqn": 
"nqn.2014-08.org.nvmexpress:uuid:7ae7050a-8315-45df-851f-c8b037ce726d", 00:18:31.959 "addr": "", 00:18:31.959 "svcid": "" 00:18:31.959 }, 00:18:31.959 "numa_id": 1 00:18:31.959 } 00:18:31.959 ] 00:18:31.959 } 00:18:31.959 ] 00:18:31.959 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@79 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/spdk/nvme0n1 00:18:31.959 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@80 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/spdk/nvme0n1 00:18:31.959 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@81 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/spdk/nvme0n1 00:18:31.959 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@83 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/spdk/nvme0 00:18:31.959 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@84 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/spdk/nvme0 00:18:32.278 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@85 -- # '[' 4 -ne 0 ']' 00:18:32.278 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@86 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/spdk/nvme0 00:18:32.278 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@88 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/spdk/nvme0 00:18:32.278 Smart Log for NVME device:nvme0 namespace-id:ffffffff 00:18:32.278 critical_warning : 0 00:18:32.278 temperature : 36 °C (309 K) 00:18:32.278 available_spare : 100% 00:18:32.278 available_spare_threshold : 10% 00:18:32.278 percentage_used : 6% 00:18:32.278 endurance group critical warning summary: 0 00:18:32.278 Data Units Read : 103,469,199 (52.98 TB) 00:18:32.278 Data Units Written : 227,640,082 (116.55 TB) 00:18:32.278 host_read_commands : 6,942,809,770 00:18:32.279 host_write_commands : 8,109,758,873 00:18:32.279 controller_busy_time : 604 00:18:32.279 power_cycles : 97 00:18:32.279 power_on_hours : 39,056 00:18:32.279 unsafe_shutdowns : 77 00:18:32.279 media_errors : 0 00:18:32.279 num_err_log_entries : 7,941 00:18:32.279 Warning Temperature Time : 474 00:18:32.279 Critical Composite Temperature Time : 0 00:18:32.279 Thermal Management T1 Trans Count : 0 00:18:32.279 Thermal Management T2 Trans Count : 0 00:18:32.279 Thermal Management T1 Total Time : 0 00:18:32.279 Thermal Management T2 Total Time : 0 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@89 -- # /usr/local/src/nvme-cli/nvme error-log /dev/spdk/nvme0 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@90 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/spdk/nvme0 -f 1 -l 100 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@91 -- # /usr/local/src/nvme-cli/nvme get-log /dev/spdk/nvme0 -i 1 -l 100 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@92 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0 00:18:32.279 [2024-12-05 13:50:03.558410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:d8:00.0, 0] resetting controller 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@93 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/spdk/nvme0 -n 1 -f 2 -v 0 00:18:32.279 [2024-12-05 13:50:03.578465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:186 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:32.279 [2024-12-05 13:50:03.578496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT NAMESPACE SPECIFIC (01/0f) qid:0 cid:186 cdw0:0 sqhd:000d p:1 m:0 dnr:1 
00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@93 -- # true 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 
00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8 ']' 
00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11 ']' 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@102 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@105 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:18:32.279 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@105 -- # jq '.[].block_size' 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@105 -- # bs=512 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@106 -- # head -c512 /dev/urandom 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme write /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file 00:18:32.556 write: Success 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@108 -- # /usr/local/src/nvme-cli/nvme read /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file 00:18:32.556 read: Success 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@109 -- # cmp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@110 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@114 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 5 --cdw10=0x3ff0003 --cdw11=0x1 -r 00:18:32.556 Admin Command Create I/O Completion Queue is Success and result: 0x00000000 00:18:32.556 13:50:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@115 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 4 --cdw10=0x3 00:18:32.556 Admin Command Delete I/O Completion Queue is Success and result: 0x00000000 00:18:32.556 13:50:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@117 -- # [[ -c /dev/spdk/nvme0 ]] 00:18:32.556 13:50:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@118 -- # [[ -c /dev/spdk/nvme0n1 ]] 00:18:32.556 13:50:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@120 -- # trap - SIGINT SIGTERM EXIT 00:18:32.556 13:50:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@121 -- # killprocess 3904093 00:18:32.556 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@954 -- # '[' -z 3904093 ']' 00:18:32.556 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@958 -- # kill -0 3904093 00:18:32.556 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@959 -- # uname 00:18:32.557 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.557 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3904093 00:18:32.815 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.815 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.815 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3904093' 00:18:32.815 killing process with pid 3904093 00:18:32.815 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@973 -- # kill 3904093 00:18:32.815 13:50:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@978 -- # wait 3904093 00:18:33.758 [2024-12-05 13:50:05.011465] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 
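The tail of spdk_nvme_cli_cuse.sh traced above does three things: it diffs each kernel.out.N capture against the matching cuse.out.N capture, round-trips 512 random bytes through the CUSE block node, and issues two admin-passthru commands. A condensed bash sketch of that flow follows; the device paths, opcodes and CDW values are taken from the trace, binary paths are shortened, and the loop structure, variable names and error handling are assumptions rather than the script's actual code.

# Sketch only -- condensed from the trace above, not the script verbatim.
cuse_ctrl=/dev/spdk/nvme0
cuse_ns=/dev/spdk/nvme0n1
match_dir=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files

# 1. Every capture taken through the kernel nvme node must match the one
#    taken through the CUSE node (any diff output is a test failure).
for i in {1..11}; do
    [[ -f $match_dir/kernel.out.$i && -f $match_dir/cuse.out.$i ]] || continue
    diff --suppress-common-lines "$match_dir/kernel.out.$i" "$match_dir/cuse.out.$i"
done
rm -rf "$match_dir"

# 2. Write 512 random bytes through the CUSE namespace, read them back, compare.
bs=$(rpc.py bdev_get_bdevs | jq '.[].block_size')      # 512 in this run
head -c "$bs" /dev/urandom > write_file
nvme write "$cuse_ns" --data-size="$bs" --data=write_file
nvme read  "$cuse_ns" --data-size="$bs" --data=read_file
cmp write_file read_file && rm -f write_file read_file

# 3. Admin passthrough on the CUSE controller node: create (opcode 0x5) and
#    delete (opcode 0x4) an I/O completion queue.
nvme admin-passthru "$cuse_ctrl" -o 5 --cdw10=0x3ff0003 --cdw11=0x1 -r
nvme admin-passthru "$cuse_ctrl" -o 4 --cdw10=0x3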
00:18:37.036 00:18:37.036 real 0m23.539s 00:18:37.036 user 0m21.858s 00:18:37.036 sys 0m7.743s 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:37.036 ************************************ 00:18:37.036 END TEST nvme_cli_cuse 00:18:37.036 ************************************ 00:18:37.036 13:50:08 nvme_cuse -- cuse/nvme_cuse.sh@20 -- # run_test nvme_cli_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh 00:18:37.036 13:50:08 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:37.036 13:50:08 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.036 13:50:08 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:37.036 ************************************ 00:18:37.036 START TEST nvme_cli_plugin 00:18:37.036 ************************************ 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh 00:18:37.036 * Looking for test storage... 00:18:37.036 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1711 -- # lcov --version 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@344 -- # case "$op" in 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@345 -- # : 1 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@365 -- # decimal 1 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@353 -- # local d=1 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@355 -- # echo 1 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@366 -- # decimal 2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@353 -- # local d=2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@355 -- # echo 2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@368 -- # return 0 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:37.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.036 --rc genhtml_branch_coverage=1 00:18:37.036 --rc genhtml_function_coverage=1 00:18:37.036 --rc genhtml_legend=1 00:18:37.036 --rc geninfo_all_blocks=1 00:18:37.036 --rc geninfo_unexecuted_blocks=1 00:18:37.036 00:18:37.036 ' 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:37.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.036 --rc genhtml_branch_coverage=1 00:18:37.036 --rc genhtml_function_coverage=1 00:18:37.036 --rc genhtml_legend=1 00:18:37.036 --rc geninfo_all_blocks=1 00:18:37.036 --rc geninfo_unexecuted_blocks=1 00:18:37.036 00:18:37.036 ' 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:37.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.036 --rc genhtml_branch_coverage=1 00:18:37.036 --rc genhtml_function_coverage=1 00:18:37.036 --rc genhtml_legend=1 00:18:37.036 --rc geninfo_all_blocks=1 00:18:37.036 --rc geninfo_unexecuted_blocks=1 00:18:37.036 00:18:37.036 ' 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:37.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.036 --rc genhtml_branch_coverage=1 00:18:37.036 --rc genhtml_function_coverage=1 00:18:37.036 --rc genhtml_legend=1 00:18:37.036 --rc geninfo_all_blocks=1 00:18:37.036 --rc geninfo_unexecuted_blocks=1 00:18:37.036 00:18:37.036 ' 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- 
nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.036 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- paths/export.sh@5 -- # export PATH 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@10 -- # ctrls=() 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@10 -- # declare -A ctrls 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@11 -- # nvmes=() 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@11 -- # declare -A nvmes 00:18:37.037 13:50:08 
nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@12 -- # bdfs=() 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@12 -- # declare -A bdfs 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@14 -- # nvme_name= 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@11 -- # trap 'killprocess $spdk_tgt_pid; "$rootdir/scripts/setup.sh" reset' EXIT 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@28 -- # kernel_out=() 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@29 -- # cuse_out=() 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@31 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@36 -- # export PCI_BLOCKED= 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@36 -- # PCI_BLOCKED= 00:18:37.037 13:50:08 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:18:40.344 Waiting for block devices as requested 00:18:40.344 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:40.602 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:40.602 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:40.602 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:40.887 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:40.887 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:40.887 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:41.147 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:41.147 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:41.147 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:41.407 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:41.407 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:41.407 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:41.665 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:41.665 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:41.665 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:41.922 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@39 -- # scan_nvme_ctrls 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@49 -- # pci=0000:d8:00.0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@50 -- # pci_can_use 0000:d8:00.0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@18 -- # local i 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@21 -- # [[ =~ 0000:d8:00.0 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@25 -- # [[ -z '' ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- scripts/common.sh@27 -- # return 0 00:18:42.858 13:50:14 
nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@18 -- # shift 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[vid]=0x8086 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n BTLJ8234018V4P0DGN ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ8234018V4P0DGN "' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ8234018V4P0DGN ' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n INTEL SSDPE2KX040T8 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8 "' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8 ' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n VDV1Y295 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV1Y295"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[fr]=VDV1Y295 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
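The long run of functions.sh lines around here is scan_nvme_ctrls/nvme_get filling a bash associative array from "nvme id-ctrl /dev/nvme0": each "reg : value" line of the CLI output is split on ':' (the IFS=: / read -r reg val pairs in the trace) and stored as nvme0[reg]=value. A minimal sketch of that pattern, assuming plain "field : value" output and ignoring the namespace and binary-field handling the real helper performs:

# Minimal sketch of the nvme_get pattern visible in the trace (assumed
# "reg : value" output format; the real functions.sh helper does more).
declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # drop the padding around the field name
    val=${val# }                    # drop the leading space after ':'
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
# Values seen in this run: nvme0[vid]=0x8086, nvme0[sn]=BTLJ8234018V4P0DGN,
# nvme0[mn]="INTEL SSDPE2KX040T8", nvme0[fr]=VDV1Y295, nvme0[mdts]=5, ...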
00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[rab]=0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 5cd2e4 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 5 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[mdts]=5 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.858 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x10200 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ver]=0x10200 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x989680 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0xe4e1c0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 
13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x200 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[oaes]=0x200 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ctratt]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[cntrltype]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- 
nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[mec]=1 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[oacs]=0xe 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x18 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[frmw]=0x18 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 
nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[lpa]=0xe 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 63 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[elpe]=63 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 353 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[cctemp]=353 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # 
IFS=: 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:18:42.859 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:18:42.860 
13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 
nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"' 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[nn]=128 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.860 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x6 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[oncs]=0x6 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 
-- # nvme0[fna]=0x4 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[vwc]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ocfs]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # 
eval 'nvme0[sgls]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[sgls]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[subnqn]= 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n - ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:18:42.861 13:50:14 
nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@18 -- # shift 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:18:42.861 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x1d1c0beb0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x1d1c0beb0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x1d1c0beb0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x1d1c0beb0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x1d1c0beb0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x1d1c0beb0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="1"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=1 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[flbas]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # 
IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[mc]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[dpc]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # 
ng0n1[nawun]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="4,000,787,030,016"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=4,000,787,030,016 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 
nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[mssrl]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[mcl]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[msrc]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:18:42.862 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg 
val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 01000000d91400000000000000000000 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="01000000d91400000000000000000000"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[nguid]=01000000d91400000000000000000000 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 000000000000d914 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="000000000000d914"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[eui64]=000000000000d914 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@18 -- # shift 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 
nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[mc]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # 
nvme0n1[dpc]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:18:42.863 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[mcl]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- 
nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[msrc]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 01000000d91400000000000000000000 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="01000000d91400000000000000000000"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[nguid]=01000000d91400000000000000000000 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n 000000000000d914 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 
-- # eval 'nvme0n1[eui64]="000000000000d914"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[eui64]=000000000000d914 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # IFS=: 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@21 -- # read -r reg val 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:d8:00.0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@41 -- # nvme list 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@41 -- # kernel_out[0]='Node Generic SN Model Namespace Usage Format FW Rev 00:18:42.864 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:18:42.864 nvme0n1 nvme0n1 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 TB 512 B + 0 B VDV1Y295' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@42 -- # nvme list -v 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v 00:18:42.864 
13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:42.864 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@42 -- # kernel_out[1]='Subsystem Subsystem-NQN Controllers 00:18:42.864 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:18:42.864 nvme0 nvme0 00:18:42.864 00:18:42.864 Device SN MN FR TxPort Address Subsystem Namespaces 00:18:42.864 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:18:42.864 nvme0 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 VDV1Y295 pcie 0000:d8:00.0 nvme0 nvme0n1 00:18:42.864 00:18:42.864 Device Generic NSID Usage Format Controllers 00:18:42.865 ------------ ------------ ---------- -------------------------- ---------------- ---------------- 00:18:42.865 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@43 -- # nvme list -v -o json 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v -o json 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@43 -- # kernel_out[2]='{ 00:18:42.865 "Devices":[ 00:18:42.865 { 00:18:42.865 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:804400cf-1c42-e711-906e-0012795d9712", 00:18:42.865 "Subsystems":[ 00:18:42.865 { 00:18:42.865 "Subsystem":"nvme0", 00:18:42.865 00:18:42.865 "Controllers":[ 00:18:42.865 { 00:18:42.865 "Controller":"nvme0", 00:18:42.865 "SerialNumber":"BTLJ8234018V4P0DGN", 00:18:42.865 "ModelNumber":"INTEL SSDPE2KX040T8", 00:18:42.865 "Firmware":"VDV1Y295", 00:18:42.865 "Transport":"pcie", 00:18:42.865 "Address":"0000:d8:00.0", 00:18:42.865 "Namespaces":[ 00:18:42.865 { 00:18:42.865 "NameSpace":"nvme0n1", 00:18:42.865 "Generic":"nvme0n1", 00:18:42.865 "NSID":1, 00:18:42.865 "UsedBytes":4000787030016, 00:18:42.865 "MaximumLBA":7814037168, 00:18:42.865 "PhysicalSize":4000787030016, 00:18:42.865 "SectorSize":512 00:18:42.865 } 00:18:42.865 ], 00:18:42.865 "Paths":[ 00:18:42.865 ] 00:18:42.865 } 00:18:42.865 ], 00:18:42.865 "Namespaces":[ 00:18:42.865 ] 00:18:42.865 } 00:18:42.865 ] 00:18:42.865 } 00:18:42.865 ] 00:18:42.865 }' 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@44 -- # nvme list-subsys 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list-subsys 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@44 -- # kernel_out[3]='nvme0 - 00:18:42.865 \ 00:18:42.865 +- nvme0 pcie 0000:d8:00.0 live' 00:18:42.865 13:50:14 nvme_cuse.nvme_cli_plugin -- 
cuse/spdk_nvme_cli_plugin.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:18:46.146 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:18:46.146 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:18:49.431 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@49 -- # spdk_tgt_pid=3908503 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@48 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@51 -- # waitforlisten 3908503 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@835 -- # '[' -z 3908503 ']' 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.364 13:50:21 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:50.622 [2024-12-05 13:50:21.929444] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
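The records that follow trace the SPDK side of the test: spdk_tgt is started, waitforlisten blocks until /var/tmp/spdk.sock is up, and rpc.py then attaches the PCIe controller and exposes it through CUSE. Condensed into a plain shell sketch (not the test script itself; the paths and the 0000:d8:00.0 address come from the log above, ordering and error handling are simplified):

    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    # Start the target; the test blocks on /var/tmp/spdk.sock via waitforlisten before issuing RPCs.
    "$SPDK/build/bin/spdk_tgt" &
    # Attach the PCIe controller as bdev controller "nvme0" and expose it through CUSE,
    # which creates the /dev/spdk/nvme0 and /dev/spdk/nvme0n1 character devices seen later in the log.
    "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:d8:00.0
    "$SPDK/scripts/rpc.py" bdev_nvme_cuse_register -n nvme0
    # Inspect what was created: the nvme0n1 bdev and the controller's cuse_device entry.
    "$SPDK/scripts/rpc.py" bdev_get_bdevs
    "$SPDK/scripts/rpc.py" bdev_nvme_get_controllers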
00:18:50.622 [2024-12-05 13:50:21.929527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908503 ] 00:18:50.622 [2024-12-05 13:50:22.052539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.622 [2024-12-05 13:50:22.108249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.881 [2024-12-05 13:50:22.329439] 'OCF_Core' volume operations registered 00:18:50.881 [2024-12-05 13:50:22.329478] 'OCF_Cache' volume operations registered 00:18:50.881 [2024-12-05 13:50:22.333929] 'OCF Composite' volume operations registered 00:18:50.881 [2024-12-05 13:50:22.338421] 'SPDK_block_device' volume operations registered 00:18:51.450 13:50:22 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.450 13:50:22 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@868 -- # return 0 00:18:51.450 13:50:22 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@54 -- # for ctrl in "${ordered_ctrls[@]}" 00:18:51.450 13:50:22 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:d8:00.0 00:18:54.740 nvme0n1 00:18:54.740 13:50:25 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@56 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n nvme0 00:18:54.740 [2024-12-05 13:50:26.212989] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:18:54.740 [2024-12-05 13:50:26.213035] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:18:54.740 [2024-12-05 13:50:26.213164] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:18:54.740 [2024-12-05 13:50:26.213210] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:18:54.740 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:18:54.999 [ 00:18:54.999 { 00:18:54.999 "name": "nvme0n1", 00:18:54.999 "aliases": [ 00:18:54.999 "17a30ac8-ee5f-42ef-886f-cc1ecc0f3efb" 00:18:54.999 ], 00:18:54.999 "product_name": "NVMe disk", 00:18:54.999 "block_size": 512, 00:18:54.999 "num_blocks": 7814037168, 00:18:54.999 "uuid": "17a30ac8-ee5f-42ef-886f-cc1ecc0f3efb", 00:18:54.999 "numa_id": 1, 00:18:54.999 "assigned_rate_limits": { 00:18:54.999 "rw_ios_per_sec": 0, 00:18:54.999 "rw_mbytes_per_sec": 0, 00:18:54.999 "r_mbytes_per_sec": 0, 00:18:54.999 "w_mbytes_per_sec": 0 00:18:54.999 }, 00:18:54.999 "claimed": false, 00:18:54.999 "zoned": false, 00:18:54.999 "supported_io_types": { 00:18:54.999 "read": true, 00:18:54.999 "write": true, 00:18:54.999 "unmap": true, 00:18:54.999 "flush": true, 00:18:54.999 "reset": true, 00:18:54.999 "nvme_admin": true, 00:18:54.999 "nvme_io": true, 00:18:54.999 "nvme_io_md": false, 00:18:54.999 "write_zeroes": true, 00:18:54.999 "zcopy": false, 00:18:54.999 "get_zone_info": false, 00:18:54.999 "zone_management": false, 00:18:54.999 "zone_append": false, 00:18:54.999 "compare": false, 00:18:54.999 "compare_and_write": false, 00:18:54.999 "abort": true, 00:18:54.999 "seek_hole": false, 00:18:54.999 "seek_data": false, 00:18:54.999 "copy": false, 00:18:54.999 
"nvme_iov_md": false 00:18:54.999 }, 00:18:54.999 "driver_specific": { 00:18:54.999 "nvme": [ 00:18:54.999 { 00:18:54.999 "pci_address": "0000:d8:00.0", 00:18:54.999 "trid": { 00:18:54.999 "trtype": "PCIe", 00:18:54.999 "traddr": "0000:d8:00.0" 00:18:54.999 }, 00:18:54.999 "cuse_device": "spdk/nvme0n1", 00:18:54.999 "ctrlr_data": { 00:18:54.999 "cntlid": 0, 00:18:54.999 "vendor_id": "0x8086", 00:18:54.999 "model_number": "INTEL SSDPE2KX040T8", 00:18:54.999 "serial_number": "BTLJ8234018V4P0DGN", 00:18:54.999 "firmware_revision": "VDV1Y295", 00:18:54.999 "oacs": { 00:18:54.999 "security": 0, 00:18:54.999 "format": 1, 00:18:54.999 "firmware": 1, 00:18:54.999 "ns_manage": 1 00:18:54.999 }, 00:18:54.999 "multi_ctrlr": false, 00:18:54.999 "ana_reporting": false 00:18:54.999 }, 00:18:54.999 "vs": { 00:18:54.999 "nvme_version": "1.2" 00:18:54.999 }, 00:18:54.999 "ns_data": { 00:18:54.999 "id": 1, 00:18:54.999 "can_share": false 00:18:54.999 } 00:18:54.999 } 00:18:54.999 ], 00:18:54.999 "mp_policy": "active_passive" 00:18:54.999 } 00:18:54.999 } 00:18:54.999 ] 00:18:54.999 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:18:55.317 [ 00:18:55.317 { 00:18:55.317 "name": "nvme0", 00:18:55.317 "ctrlrs": [ 00:18:55.317 { 00:18:55.317 "state": "enabled", 00:18:55.317 "cuse_device": "spdk/nvme0", 00:18:55.317 "trid": { 00:18:55.317 "trtype": "PCIe", 00:18:55.317 "traddr": "0000:d8:00.0" 00:18:55.317 }, 00:18:55.317 "cntlid": 0, 00:18:55.317 "host": { 00:18:55.317 "nqn": "nqn.2014-08.org.nvmexpress:uuid:158e51a3-fa50-4359-b250-37a2231add05", 00:18:55.317 "addr": "", 00:18:55.317 "svcid": "" 00:18:55.317 }, 00:18:55.317 "numa_id": 1 00:18:55.317 } 00:18:55.317 ] 00:18:55.318 } 00:18:55.318 ] 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@63 -- # nvme spdk list 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@63 -- # cuse_out[0]='Node Generic SN Model Namespace Usage Format FW Rev 00:18:55.318 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:18:55.318 nvme0n1 nvme0n1 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 TB 512 B + 0 B VDV1Y295' 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@64 -- # nvme spdk list -v 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v 00:18:55.318 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- 
cuse/spdk_nvme_cli_plugin.sh@64 -- # cuse_out[1]='Subsystem Subsystem-NQN Controllers 00:18:55.579 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:18:55.579 nvme0 nvme0 00:18:55.579 00:18:55.579 Device SN MN FR TxPort Address Subsystem Namespaces 00:18:55.579 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:18:55.579 nvme0 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 VDV1Y295 pcie 0000:d8:00.0 nvme0 nvme0n1 00:18:55.579 00:18:55.579 Device Generic NSID Usage Format Controllers 00:18:55.579 ------------ ------------ ---------- -------------------------- ---------------- ---------------- 00:18:55.579 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@65 -- # nvme spdk list -v -o json 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v -o json 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@65 -- # cuse_out[2]='{ 00:18:55.579 "Devices":[ 00:18:55.579 { 00:18:55.579 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:804400cf-1c42-e711-906e-0012795d9712", 00:18:55.579 "Subsystems":[ 00:18:55.579 { 00:18:55.579 "Subsystem":"nvme0", 00:18:55.579 00:18:55.579 "Controllers":[ 00:18:55.579 { 00:18:55.579 "Controller":"nvme0", 00:18:55.579 "SerialNumber":"BTLJ8234018V4P0DGN", 00:18:55.579 "ModelNumber":"INTEL SSDPE2KX040T8", 00:18:55.579 "Firmware":"VDV1Y295", 00:18:55.579 "Transport":"pcie", 00:18:55.579 "Address":"0000:d8:00.0", 00:18:55.579 "Namespaces":[ 00:18:55.579 { 00:18:55.579 "NameSpace":"nvme0n1", 00:18:55.579 "Generic":"nvme0n1", 00:18:55.579 "NSID":1, 00:18:55.579 "UsedBytes":4000787030016, 00:18:55.579 "MaximumLBA":7814037168, 00:18:55.579 "PhysicalSize":4000787030016, 00:18:55.579 "SectorSize":512 00:18:55.579 } 00:18:55.579 ], 00:18:55.579 "Paths":[ 00:18:55.579 ] 00:18:55.579 } 00:18:55.579 ], 00:18:55.579 "Namespaces":[ 00:18:55.579 ] 00:18:55.579 } 00:18:55.579 ] 00:18:55.579 } 00:18:55.579 ] 00:18:55.579 }' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@66 -- # nvme spdk list-subsys 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@66 -- # cuse_out[3]='nvme0 - 00:18:55.579 \ 00:18:55.579 +- nvme0 pcie 0000:d8:00.0 live' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@69 -- # nvme spdk list-subsys -v -o json 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- 
cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys -v -o json 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@69 -- # true 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@69 -- # [[ Json output format is not supported. == \J\s\o\n\ \o\u\t\p\u\t\ \f\o\r\m\a\t\ \i\s\ \n\o\t\ \s\u\p\p\o\r\t\e\d\. ]] 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@71 -- # diff -ub /dev/fd/62 /dev/fd/61 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node Generic SN Model Namespace Usage Format FW Rev 00:18:55.579 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:18:55.579 nvme0n1 nvme0n1 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 TB 512 B + 0 B VDV1Y295' 'Subsystem Subsystem-NQN Controllers 00:18:55.579 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:18:55.579 nvme0 nvme0 00:18:55.579 00:18:55.579 Device SN MN FR TxPort Address Subsystem Namespaces 00:18:55.579 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:18:55.579 nvme0 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 VDV1Y295 pcie 0000:d8:00.0 nvme0 nvme0n1 00:18:55.579 00:18:55.579 Device Generic NSID Usage Format Controllers 00:18:55.579 ------------ ------------ ---------- -------------------------- ---------------- ---------------- 00:18:55.579 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' '{ 00:18:55.579 "Devices":[ 00:18:55.579 { 00:18:55.579 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:804400cf-1c42-e711-906e-0012795d9712", 00:18:55.579 "Subsystems":[ 00:18:55.579 { 00:18:55.579 "Subsystem":"nvme0", 00:18:55.579 00:18:55.579 "Controllers":[ 00:18:55.579 { 00:18:55.579 "Controller":"nvme0", 00:18:55.579 "SerialNumber":"BTLJ8234018V4P0DGN", 00:18:55.579 "ModelNumber":"INTEL SSDPE2KX040T8", 00:18:55.579 "Firmware":"VDV1Y295", 00:18:55.579 "Transport":"pcie", 00:18:55.579 "Address":"0000:d8:00.0", 00:18:55.579 "Namespaces":[ 00:18:55.579 { 00:18:55.579 "NameSpace":"nvme0n1", 00:18:55.579 "Generic":"nvme0n1", 00:18:55.579 "NSID":1, 00:18:55.579 "UsedBytes":4000787030016, 00:18:55.579 "MaximumLBA":7814037168, 00:18:55.579 "PhysicalSize":4000787030016, 00:18:55.579 "SectorSize":512 00:18:55.579 } 00:18:55.579 ], 00:18:55.579 "Paths":[ 00:18:55.579 ] 00:18:55.579 } 00:18:55.579 ], 00:18:55.579 "Namespaces":[ 00:18:55.579 ] 00:18:55.579 } 00:18:55.579 ] 00:18:55.579 } 00:18:55.579 ] 00:18:55.579 }' 'nvme0 - 00:18:55.579 \ 00:18:55.579 +- nvme0 pcie 0000:d8:00.0 live' 00:18:55.579 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node Generic SN Model Namespace Usage Format FW Rev 00:18:55.579 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:18:55.579 
nvme0n1 nvme0n1 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 TB 512 B + 0 B VDV1Y295' 'Subsystem Subsystem-NQN Controllers 00:18:55.579 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:18:55.580 nvme0 nvme0 00:18:55.580 00:18:55.580 Device SN MN FR TxPort Address Subsystem Namespaces 00:18:55.580 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:18:55.580 nvme0 BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 VDV1Y295 pcie 0000:d8:00.0 nvme0 nvme0n1 00:18:55.580 00:18:55.580 Device Generic NSID Usage Format Controllers 00:18:55.580 ------------ ------------ ---------- -------------------------- ---------------- ---------------- 00:18:55.580 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' '{ 00:18:55.580 "Devices":[ 00:18:55.580 { 00:18:55.580 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:804400cf-1c42-e711-906e-0012795d9712", 00:18:55.580 "Subsystems":[ 00:18:55.580 { 00:18:55.580 "Subsystem":"nvme0", 00:18:55.580 00:18:55.580 "Controllers":[ 00:18:55.580 { 00:18:55.580 "Controller":"nvme0", 00:18:55.580 "SerialNumber":"BTLJ8234018V4P0DGN", 00:18:55.580 "ModelNumber":"INTEL SSDPE2KX040T8", 00:18:55.580 "Firmware":"VDV1Y295", 00:18:55.580 "Transport":"pcie", 00:18:55.580 "Address":"0000:d8:00.0", 00:18:55.580 "Namespaces":[ 00:18:55.580 { 00:18:55.580 "NameSpace":"nvme0n1", 00:18:55.580 "Generic":"nvme0n1", 00:18:55.580 "NSID":1, 00:18:55.580 "UsedBytes":4000787030016, 00:18:55.580 "MaximumLBA":7814037168, 00:18:55.580 "PhysicalSize":4000787030016, 00:18:55.580 "SectorSize":512 00:18:55.580 } 00:18:55.580 ], 00:18:55.580 "Paths":[ 00:18:55.580 ] 00:18:55.580 } 00:18:55.580 ], 00:18:55.580 "Namespaces":[ 00:18:55.580 ] 00:18:55.580 } 00:18:55.580 ] 00:18:55.580 } 00:18:55.580 ] 00:18:55.580 }' 'nvme0 - 00:18:55.580 \ 00:18:55.580 +- nvme0 pcie 0000:d8:00.0 live' 00:18:55.580 13:50:26 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@1 -- # killprocess 3908503 00:18:55.580 13:50:26 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@954 -- # '[' -z 3908503 ']' 00:18:55.580 13:50:26 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@958 -- # kill -0 3908503 00:18:55.580 13:50:26 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@959 -- # uname 00:18:55.580 13:50:26 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.580 13:50:26 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3908503 00:18:55.580 13:50:27 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.580 13:50:27 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.580 13:50:27 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3908503' 00:18:55.580 killing process with pid 3908503 00:18:55.580 13:50:27 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@973 -- # kill 3908503 00:18:55.580 13:50:27 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@978 -- # wait 3908503 00:18:56.518 [2024-12-05 13:50:27.970038] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 
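The two printf blocks above feed diff -ub through /dev/fd process substitutions: the test re-lists the controller through the SPDK nvme-cli plugin (including the deliberately failing "list-subsys -v -o json" call, which must print "Json output format is not supported.") and fails if a captured listing ever differs from the reference capture. A minimal sketch of that comparison pattern follows; it is an assumed simplification with made-up variable names, and it omits the sed normalisation that the real spdk_nvme_cli_plugin.sh applies to both sides.

# Illustrative only - not the literal test code.
NVME_CMD=/usr/local/src/nvme-cli-plugin/nvme     # nvme-cli built with the SPDK plugin, as used above
capture_a=$("$NVME_CMD" spdk list-subsys -v)     # listing taken through the plugin
capture_b=$("$NVME_CMD" spdk list-subsys -v)     # second capture of the same listing
# A non-empty diff (non-zero exit status) fails the test.
diff -ub <(printf '%s\n' "$capture_a") <(printf '%s\n' "$capture_b")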
00:18:59.811 13:50:31 nvme_cuse.nvme_cli_plugin -- cuse/spdk_nvme_cli_plugin.sh@1 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:19:03.104 Waiting for block devices as requested 00:19:03.104 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:03.104 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:03.104 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:03.363 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:03.363 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:03.363 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:03.623 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:03.623 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:03.623 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:03.882 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:03.882 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:03.882 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:04.140 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:04.141 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:04.141 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:04.399 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:04.399 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:19:05.338 00:19:05.338 real 0m28.401s 00:19:05.338 user 0m13.994s 00:19:05.338 sys 0m10.378s 00:19:05.338 13:50:36 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.338 13:50:36 nvme_cuse.nvme_cli_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 ************************************ 00:19:05.338 END TEST nvme_cli_plugin 00:19:05.338 ************************************ 00:19:05.338 13:50:36 nvme_cuse -- cuse/nvme_cuse.sh@21 -- # run_test nvme_smartctl_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh 00:19:05.338 13:50:36 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.338 13:50:36 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.338 13:50:36 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:19:05.338 ************************************ 00:19:05.338 START TEST nvme_smartctl_cuse 00:19:05.338 ************************************ 00:19:05.338 13:50:36 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh 00:19:05.598 * Looking for test storage... 
00:19:05.598 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:19:05.598 13:50:36 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:05.598 13:50:36 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1711 -- # lcov --version 00:19:05.598 13:50:36 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@344 -- # case "$op" in 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@345 -- # : 1 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@365 -- # decimal 1 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@353 -- # local d=1 00:19:05.598 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@355 -- # echo 1 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@366 -- # decimal 2 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@353 -- # local d=2 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@355 -- # echo 2 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- scripts/common.sh@368 -- # return 0 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.599 --rc genhtml_branch_coverage=1 00:19:05.599 --rc genhtml_function_coverage=1 00:19:05.599 --rc genhtml_legend=1 00:19:05.599 --rc geninfo_all_blocks=1 00:19:05.599 --rc geninfo_unexecuted_blocks=1 00:19:05.599 00:19:05.599 ' 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.599 --rc genhtml_branch_coverage=1 00:19:05.599 --rc genhtml_function_coverage=1 00:19:05.599 --rc genhtml_legend=1 00:19:05.599 --rc geninfo_all_blocks=1 00:19:05.599 --rc geninfo_unexecuted_blocks=1 00:19:05.599 00:19:05.599 ' 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.599 --rc genhtml_branch_coverage=1 00:19:05.599 --rc genhtml_function_coverage=1 00:19:05.599 --rc genhtml_legend=1 00:19:05.599 --rc geninfo_all_blocks=1 00:19:05.599 --rc geninfo_unexecuted_blocks=1 00:19:05.599 00:19:05.599 ' 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.599 --rc genhtml_branch_coverage=1 00:19:05.599 --rc genhtml_function_coverage=1 00:19:05.599 --rc genhtml_legend=1 00:19:05.599 --rc geninfo_all_blocks=1 00:19:05.599 --rc geninfo_unexecuted_blocks=1 00:19:05.599 00:19:05.599 ' 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@11 -- # SMARTCTL_CMD='smartctl -d nvme' 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@12 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:19:05.599 13:50:37 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:19:09.792 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 
00:19:09.792 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:09.792 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:09.792 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:09.792 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:09.793 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:13.091 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@16 -- # get_first_nvme_bdf 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1509 -- # bdfs=() 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1509 -- # local bdfs 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1498 -- # local bdfs 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@16 -- # bdf=0000:d8:00.0 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@18 -- # PCI_ALLOWED=0000:d8:00.0 00:19:13.658 13:50:45 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:19:16.947 0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7 00:19:16.947 0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6 00:19:16.947 0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5 00:19:16.947 0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4 00:19:16.947 0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3 00:19:16.947 0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2 00:19:16.947 0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1 00:19:16.947 0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0 00:19:16.947 0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7 
00:19:16.947 0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6 00:19:16.947 0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5 00:19:16.947 0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4 00:19:16.947 0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3 00:19:16.947 0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2 00:19:16.947 0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1 00:19:16.947 0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0 00:19:16.947 Waiting for block devices as requested 00:19:16.947 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@19 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@19 -- # nvme_name=nvme0 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@20 -- # [[ -z nvme0 ]] 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@25 -- # smartctl -d nvme --json=g -a /dev/nvme0 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@25 -- # sort 00:19:17.516 13:50:48 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@25 -- # grep -v /dev/nvme0 00:19:17.516 13:50:49 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@25 -- # KERNEL_SMART_JSON='json = {}; 00:19:17.516 json.device = {}; 00:19:17.516 json.device.protocol = "NVMe"; 00:19:17.516 json.device.type = "nvme"; 00:19:17.516 json.firmware_version = "VDV1Y295"; 00:19:17.516 json.json_format_version = []; 00:19:17.516 json.json_format_version[0] = 1; 00:19:17.516 json.json_format_version[1] = 0; 00:19:17.516 json.local_time = {}; 00:19:17.516 json.local_time.asctime = "Thu Dec 5 13:50:49 2024 CET"; 00:19:17.516 json.local_time.time_t = 1733403049; 00:19:17.516 json.model_name = "INTEL SSDPE2KX040T8"; 00:19:17.516 json.nvme_controller_id = 0; 00:19:17.516 json.nvme_error_information_log = {}; 00:19:17.516 json.nvme_error_information_log.read = 16; 00:19:17.516 json.nvme_error_information_log.size = 64; 00:19:17.516 json.nvme_error_information_log.table = []; 00:19:17.516 json.nvme_error_information_log.table[0] = {}; 00:19:17.516 json.nvme_error_information_log.table[0].error_count = 7941; 00:19:17.516 json.nvme_error_information_log.table[0].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[0].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[0].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[0].status_field = {}; 00:19:17.516 
json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[0].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:19:17.516 json.nvme_error_information_log.table[1] = {}; 00:19:17.516 json.nvme_error_information_log.table[10] = {}; 00:19:17.516 json.nvme_error_information_log.table[10].error_count = 7931; 00:19:17.516 json.nvme_error_information_log.table[10].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[10].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[10].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[10].status_field = {}; 00:19:17.516 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:19:17.516 json.nvme_error_information_log.table[11] = {}; 00:19:17.516 json.nvme_error_information_log.table[11].error_count = 7930; 00:19:17.516 json.nvme_error_information_log.table[11].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[11].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[11].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[11].status_field = {}; 00:19:17.516 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[11].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:19:17.516 json.nvme_error_information_log.table[12] = {}; 00:19:17.516 json.nvme_error_information_log.table[12].error_count = 7929; 00:19:17.516 json.nvme_error_information_log.table[12].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[12].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[12].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[12].status_field = {}; 00:19:17.516 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:19:17.516 json.nvme_error_information_log.table[13] = {}; 00:19:17.516 json.nvme_error_information_log.table[13].error_count = 7928; 00:19:17.516 
json.nvme_error_information_log.table[13].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[13].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[13].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[13].status_field = {}; 00:19:17.516 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[13].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[13].submission_queue_id = 2; 00:19:17.516 json.nvme_error_information_log.table[14] = {}; 00:19:17.516 json.nvme_error_information_log.table[14].error_count = 7927; 00:19:17.516 json.nvme_error_information_log.table[14].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[14].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[14].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[14].status_field = {}; 00:19:17.516 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:19:17.516 json.nvme_error_information_log.table[15] = {}; 00:19:17.516 json.nvme_error_information_log.table[15].error_count = 7926; 00:19:17.516 json.nvme_error_information_log.table[15].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[15].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[15].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[15].status_field = {}; 00:19:17.516 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:19:17.516 json.nvme_error_information_log.table[1].error_count = 7940; 00:19:17.516 json.nvme_error_information_log.table[1].lba = {}; 00:19:17.516 json.nvme_error_information_log.table[1].lba.value = 0; 00:19:17.516 json.nvme_error_information_log.table[1].phase_tag = false; 00:19:17.516 json.nvme_error_information_log.table[1].status_field = {}; 00:19:17.516 json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:19:17.516 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:19:17.516 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:19:17.516 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:19:17.516 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:19:17.516 json.nvme_error_information_log.table[1].submission_queue_id = 2; 
00:19:17.516 json.nvme_error_information_log.table[2] = {}; 00:19:17.517 json.nvme_error_information_log.table[2].error_count = 7939; 00:19:17.517 json.nvme_error_information_log.table[2].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[2].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[2].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[2].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:19:17.517 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:19:17.517 json.nvme_error_information_log.table[2].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:19:17.517 json.nvme_error_information_log.table[3] = {}; 00:19:17.517 json.nvme_error_information_log.table[3].error_count = 7938; 00:19:17.517 json.nvme_error_information_log.table[3].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[3].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[3].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[3].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:19:17.517 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:19:17.517 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:19:17.517 json.nvme_error_information_log.table[4] = {}; 00:19:17.517 json.nvme_error_information_log.table[4].error_count = 7937; 00:19:17.517 json.nvme_error_information_log.table[4].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[4].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[4].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[4].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:19:17.517 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:19:17.517 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:19:17.517 json.nvme_error_information_log.table[5] = {}; 00:19:17.517 json.nvme_error_information_log.table[5].error_count = 7936; 00:19:17.517 json.nvme_error_information_log.table[5].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[5].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[5].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[5].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:19:17.517 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 
00:19:17.517 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:19:17.517 json.nvme_error_information_log.table[6] = {}; 00:19:17.517 json.nvme_error_information_log.table[6].error_count = 7935; 00:19:17.517 json.nvme_error_information_log.table[6].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[6].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[6].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[6].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:19:17.517 json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:19:17.517 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:19:17.517 json.nvme_error_information_log.table[7] = {}; 00:19:17.517 json.nvme_error_information_log.table[7].error_count = 7934; 00:19:17.517 json.nvme_error_information_log.table[7].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[7].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[7].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[7].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:19:17.517 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:19:17.517 json.nvme_error_information_log.table[7].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:19:17.517 json.nvme_error_information_log.table[8] = {}; 00:19:17.517 json.nvme_error_information_log.table[8].error_count = 7933; 00:19:17.517 json.nvme_error_information_log.table[8].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[8].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[8].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[8].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:19:17.517 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:19:17.517 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:19:17.517 json.nvme_error_information_log.table[9] = {}; 00:19:17.517 json.nvme_error_information_log.table[9].error_count = 7932; 00:19:17.517 json.nvme_error_information_log.table[9].lba = {}; 00:19:17.517 json.nvme_error_information_log.table[9].lba.value = 0; 00:19:17.517 json.nvme_error_information_log.table[9].phase_tag = false; 00:19:17.517 json.nvme_error_information_log.table[9].status_field = {}; 00:19:17.517 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:19:17.517 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:19:17.517 
json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:19:17.517 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:19:17.517 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:19:17.517 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:19:17.517 json.nvme_error_information_log.unread = 48; 00:19:17.517 json.nvme_ieee_oui_identifier = 6083300; 00:19:17.517 json.nvme_number_of_namespaces = 128; 00:19:17.517 json.nvme_pci_vendor = {}; 00:19:17.517 json.nvme_pci_vendor.id = 32902; 00:19:17.517 json.nvme_pci_vendor.subsystem_id = 32902; 00:19:17.517 json.nvme_smart_health_information_log = {}; 00:19:17.517 json.nvme_smart_health_information_log.available_spare = 100; 00:19:17.517 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:19:17.517 json.nvme_smart_health_information_log.controller_busy_time = 604; 00:19:17.517 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:19:17.517 json.nvme_smart_health_information_log.critical_warning = 0; 00:19:17.517 json.nvme_smart_health_information_log.data_units_read = 103469209; 00:19:17.517 json.nvme_smart_health_information_log.data_units_written = 227640082; 00:19:17.517 json.nvme_smart_health_information_log.host_reads = 6942810030; 00:19:17.517 json.nvme_smart_health_information_log.host_writes = 8109758874; 00:19:17.517 json.nvme_smart_health_information_log.media_errors = 0; 00:19:17.517 json.nvme_smart_health_information_log.num_err_log_entries = 7941; 00:19:17.517 json.nvme_smart_health_information_log.percentage_used = 6; 00:19:17.517 json.nvme_smart_health_information_log.power_cycles = 97; 00:19:17.517 json.nvme_smart_health_information_log.power_on_hours = 39056; 00:19:17.517 json.nvme_smart_health_information_log.temperature = 36; 00:19:17.517 json.nvme_smart_health_information_log.unsafe_shutdowns = 77; 00:19:17.517 json.nvme_smart_health_information_log.warning_temp_time = 474; 00:19:17.517 json.nvme_total_capacity = 4000787030016; 00:19:17.517 json.nvme_unallocated_capacity = 0; 00:19:17.517 json.nvme_version = {}; 00:19:17.517 json.nvme_version.string = "1.2"; 00:19:17.517 json.nvme_version.value = 66048; 00:19:17.517 json.power_cycle_count = 97; 00:19:17.517 json.power_on_time = {}; 00:19:17.517 json.power_on_time.hours = 39056; 00:19:17.517 json.serial_number = "BTLJ8234018V4P0DGN"; 00:19:17.517 json.smartctl = {}; 00:19:17.517 json.smartctl.argv = []; 00:19:17.517 json.smartctl.argv[0] = "smartctl"; 00:19:17.517 json.smartctl.argv[1] = "-d"; 00:19:17.517 json.smartctl.argv[2] = "nvme"; 00:19:17.517 json.smartctl.argv[3] = "--json=g"; 00:19:17.517 json.smartctl.argv[4] = "-a"; 00:19:17.517 json.smartctl.build_info = "(local build)"; 00:19:17.517 json.smartctl.exit_status = 0; 00:19:17.517 json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64"; 00:19:17.517 json.smartctl.pre_release = false; 00:19:17.517 json.smartctl.svn_revision = "5530"; 00:19:17.517 json.smartctl.version = []; 00:19:17.517 json.smartctl.version[0] = 7; 00:19:17.517 json.smartctl.version[1] = 4; 00:19:17.517 json.smart_status = {}; 00:19:17.517 json.smart_status.nvme = {}; 00:19:17.517 json.smart_status.nvme.value = 0; 00:19:17.517 json.smart_status.passed = true; 00:19:17.517 json.smart_support = {}; 00:19:17.517 json.smart_support.available = true; 00:19:17.517 json.smart_support.enabled = true; 00:19:17.517 json.temperature = {}; 00:19:17.517 json.temperature.current = 36;' 00:19:17.517 13:50:49 
nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@27 -- # smartctl -d nvme -i /dev/nvme0n1 00:19:17.517 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:17.517 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:17.517 00:19:17.777 === START OF INFORMATION SECTION === 00:19:17.777 Model Number: INTEL SSDPE2KX040T8 00:19:17.777 Serial Number: BTLJ8234018V4P0DGN 00:19:17.777 Firmware Version: VDV1Y295 00:19:17.777 PCI Vendor/Subsystem ID: 0x8086 00:19:17.777 IEEE OUI Identifier: 0x5cd2e4 00:19:17.777 Total NVM Capacity: 4,000,787,030,016 [4.00 TB] 00:19:17.777 Unallocated NVM Capacity: 0 00:19:17.777 Controller ID: 0 00:19:17.777 NVMe Version: 1.2 00:19:17.777 Number of Namespaces: 128 00:19:17.777 Namespace 1 Size/Capacity: 4,000,787,030,016 [4.00 TB] 00:19:17.777 Namespace 1 Formatted LBA Size: 512 00:19:17.777 Namespace 1 IEEE EUI-64: 000000 000000d914 00:19:17.777 Local Time is: Thu Dec 5 13:50:49 2024 CET 00:19:17.777 00:19:17.777 13:50:49 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@30 -- # smartctl -d nvme -l error /dev/nvme0 00:19:17.777 13:50:49 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@30 -- # KERNEL_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:17.777 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:17.777 00:19:17.777 === START OF SMART DATA SECTION === 00:19:17.777 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:19:17.777 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:19:17.777 0 7941 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 1 7940 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 2 7939 0 - 0xc00c - 0 - - Internal Error 00:19:17.777 3 7938 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 4 7937 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 5 7936 0 - 0xc00c - 0 - - Internal Error 00:19:17.777 6 7935 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 7 7934 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 8 7933 0 - 0xc00c - 0 - - Internal Error 00:19:17.777 9 7932 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 10 7931 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 11 7930 0 - 0xc00c - 0 - - Internal Error 00:19:17.777 12 7929 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 13 7928 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 14 7927 0 - 0xc00c - 0 - - Internal Error 00:19:17.777 15 7926 2 - 0xc00c - 0 - - Internal Error 00:19:17.777 ... 
(48 entries not read)' 00:19:17.777 13:50:49 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:19:21.068 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:19:21.068 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:19:24.357 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@35 -- # spdk_tgt_pid=3915507 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@36 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@38 -- # waitforlisten 3915507 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@835 -- # '[' -z 3915507 ']' 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@10 -- # set +x 00:19:24.927 13:50:56 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:19:24.927 [2024-12-05 13:50:56.394561] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
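Once the target started above finishes initializing (the DPDK EAL parameter dump and reactor notices that follow), the smartctl CUSE comparison proceeds: the controller at 0000:d8:00.0 is attached over PCIe, exposed through CUSE as /dev/spdk/nvme0, and smartctl is run against that character device so its JSON can be diffed against the kernel-device capture taken earlier. A condensed sketch of that flow is given here; paths are abbreviated and details are simplified relative to the real spdk_smartctl_cuse.sh, so treat it as illustrative only.

# Illustrative only - an assumed condensation of the steps traced below.
./build/bin/spdk_tgt -m 0x3 &                          # start the SPDK target on cores 0-1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0
./scripts/rpc.py bdev_nvme_cuse_register -n Nvme0      # creates /dev/spdk/nvme0 and /dev/spdk/nvme0n1
[ -c /dev/spdk/nvme0 ]                                 # the CUSE char device must exist
CUSE_SMART_JSON=$(smartctl -d nvme --json=g -a /dev/spdk/nvme0 | grep -v /dev/spdk/nvme0 | sort)
# Print only the groups that changed relative to the kernel capture; the fields that
# legitimately move between runs (local_time, data_units_read, host_reads) are the
# ones that differ between the two JSON dumps in this log.
diff --changed-group-format='%<' --unchanged-group-format='' \
  <(echo "$KERNEL_SMART_JSON") <(echo "$CUSE_SMART_JSON")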
00:19:24.927 [2024-12-05 13:50:56.394624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915507 ] 00:19:25.186 [2024-12-05 13:50:56.515318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:25.186 [2024-12-05 13:50:56.571860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.186 [2024-12-05 13:50:56.571867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.444 [2024-12-05 13:50:56.775363] 'OCF_Core' volume operations registered 00:19:25.444 [2024-12-05 13:50:56.775399] 'OCF_Cache' volume operations registered 00:19:25.444 [2024-12-05 13:50:56.779404] 'OCF Composite' volume operations registered 00:19:25.444 [2024-12-05 13:50:56.783470] 'SPDK_block_device' volume operations registered 00:19:25.444 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.444 13:50:56 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@868 -- # return 0 00:19:25.444 13:50:56 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:19:28.733 Nvme0n1 00:19:28.733 13:51:00 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:19:28.992 [2024-12-05 13:51:00.333435] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:19:28.992 [2024-12-05 13:51:00.333486] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:28.992 [2024-12-05 13:51:00.333624] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:19:28.992 [2024-12-05 13:51:00.333678] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:19:28.992 13:51:00 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@43 -- # sleep 5 00:19:34.413 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@45 -- # '[' '!' 
-c /dev/spdk/nvme0 ']' 00:19:34.413 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@49 -- # smartctl -d nvme --json=g -a /dev/spdk/nvme0 00:19:34.413 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@49 -- # grep -v /dev/spdk/nvme0 00:19:34.413 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@49 -- # sort 00:19:34.413 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@49 -- # CUSE_SMART_JSON='json = {}; 00:19:34.413 json.device = {}; 00:19:34.413 json.device.protocol = "NVMe"; 00:19:34.413 json.device.type = "nvme"; 00:19:34.413 json.firmware_version = "VDV1Y295"; 00:19:34.413 json.json_format_version = []; 00:19:34.414 json.json_format_version[0] = 1; 00:19:34.414 json.json_format_version[1] = 0; 00:19:34.414 json.local_time = {}; 00:19:34.414 json.local_time.asctime = "Thu Dec 5 13:51:05 2024 CET"; 00:19:34.414 json.local_time.time_t = 1733403065; 00:19:34.414 json.model_name = "INTEL SSDPE2KX040T8"; 00:19:34.414 json.nvme_controller_id = 0; 00:19:34.414 json.nvme_error_information_log = {}; 00:19:34.414 json.nvme_error_information_log.read = 16; 00:19:34.414 json.nvme_error_information_log.size = 64; 00:19:34.414 json.nvme_error_information_log.table = []; 00:19:34.414 json.nvme_error_information_log.table[0] = {}; 00:19:34.414 json.nvme_error_information_log.table[0].error_count = 7941; 00:19:34.414 json.nvme_error_information_log.table[0].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[0].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[0].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[0].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[0].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:19:34.414 json.nvme_error_information_log.table[1] = {}; 00:19:34.414 json.nvme_error_information_log.table[10] = {}; 00:19:34.414 json.nvme_error_information_log.table[10].error_count = 7931; 00:19:34.414 json.nvme_error_information_log.table[10].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[10].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[10].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[10].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:19:34.414 json.nvme_error_information_log.table[11] = {}; 00:19:34.414 json.nvme_error_information_log.table[11].error_count = 7930; 00:19:34.414 json.nvme_error_information_log.table[11].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[11].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[11].phase_tag = false; 00:19:34.414 
json.nvme_error_information_log.table[11].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[11].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:19:34.414 json.nvme_error_information_log.table[12] = {}; 00:19:34.414 json.nvme_error_information_log.table[12].error_count = 7929; 00:19:34.414 json.nvme_error_information_log.table[12].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[12].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[12].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[12].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:19:34.414 json.nvme_error_information_log.table[13] = {}; 00:19:34.414 json.nvme_error_information_log.table[13].error_count = 7928; 00:19:34.414 json.nvme_error_information_log.table[13].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[13].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[13].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[13].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[13].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[13].submission_queue_id = 2; 00:19:34.414 json.nvme_error_information_log.table[14] = {}; 00:19:34.414 json.nvme_error_information_log.table[14].error_count = 7927; 00:19:34.414 json.nvme_error_information_log.table[14].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[14].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[14].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[14].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:19:34.414 json.nvme_error_information_log.table[15] = {}; 00:19:34.414 json.nvme_error_information_log.table[15].error_count = 7926; 00:19:34.414 
json.nvme_error_information_log.table[15].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[15].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[15].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[15].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:19:34.414 json.nvme_error_information_log.table[1].error_count = 7940; 00:19:34.414 json.nvme_error_information_log.table[1].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[1].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[1].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[1].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[1].submission_queue_id = 2; 00:19:34.414 json.nvme_error_information_log.table[2] = {}; 00:19:34.414 json.nvme_error_information_log.table[2].error_count = 7939; 00:19:34.414 json.nvme_error_information_log.table[2].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[2].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[2].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[2].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[2].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:19:34.414 json.nvme_error_information_log.table[3] = {}; 00:19:34.414 json.nvme_error_information_log.table[3].error_count = 7938; 00:19:34.414 json.nvme_error_information_log.table[3].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[3].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[3].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[3].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:19:34.414 
json.nvme_error_information_log.table[4] = {}; 00:19:34.414 json.nvme_error_information_log.table[4].error_count = 7937; 00:19:34.414 json.nvme_error_information_log.table[4].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[4].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[4].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[4].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:19:34.414 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:19:34.414 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:19:34.414 json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:19:34.414 json.nvme_error_information_log.table[5] = {}; 00:19:34.414 json.nvme_error_information_log.table[5].error_count = 7936; 00:19:34.414 json.nvme_error_information_log.table[5].lba = {}; 00:19:34.414 json.nvme_error_information_log.table[5].lba.value = 0; 00:19:34.414 json.nvme_error_information_log.table[5].phase_tag = false; 00:19:34.414 json.nvme_error_information_log.table[5].status_field = {}; 00:19:34.414 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:19:34.414 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:19:34.414 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 00:19:34.415 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:19:34.415 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:19:34.415 json.nvme_error_information_log.table[6] = {}; 00:19:34.415 json.nvme_error_information_log.table[6].error_count = 7935; 00:19:34.415 json.nvme_error_information_log.table[6].lba = {}; 00:19:34.415 json.nvme_error_information_log.table[6].lba.value = 0; 00:19:34.415 json.nvme_error_information_log.table[6].phase_tag = false; 00:19:34.415 json.nvme_error_information_log.table[6].status_field = {}; 00:19:34.415 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:19:34.415 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:19:34.415 json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:19:34.415 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:19:34.415 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:19:34.415 json.nvme_error_information_log.table[7] = {}; 00:19:34.415 json.nvme_error_information_log.table[7].error_count = 7934; 00:19:34.415 json.nvme_error_information_log.table[7].lba = {}; 00:19:34.415 json.nvme_error_information_log.table[7].lba.value = 0; 00:19:34.415 json.nvme_error_information_log.table[7].phase_tag = false; 00:19:34.415 json.nvme_error_information_log.table[7].status_field = {}; 00:19:34.415 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:19:34.415 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:19:34.415 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:19:34.415 
json.nvme_error_information_log.table[7].status_field.value = 24582; 00:19:34.415 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:19:34.415 json.nvme_error_information_log.table[8] = {}; 00:19:34.415 json.nvme_error_information_log.table[8].error_count = 7933; 00:19:34.415 json.nvme_error_information_log.table[8].lba = {}; 00:19:34.415 json.nvme_error_information_log.table[8].lba.value = 0; 00:19:34.415 json.nvme_error_information_log.table[8].phase_tag = false; 00:19:34.415 json.nvme_error_information_log.table[8].status_field = {}; 00:19:34.415 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:19:34.415 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:19:34.415 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:19:34.415 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:19:34.415 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:19:34.415 json.nvme_error_information_log.table[9] = {}; 00:19:34.415 json.nvme_error_information_log.table[9].error_count = 7932; 00:19:34.415 json.nvme_error_information_log.table[9].lba = {}; 00:19:34.415 json.nvme_error_information_log.table[9].lba.value = 0; 00:19:34.415 json.nvme_error_information_log.table[9].phase_tag = false; 00:19:34.415 json.nvme_error_information_log.table[9].status_field = {}; 00:19:34.415 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:19:34.415 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:19:34.415 json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:19:34.415 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:19:34.415 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:19:34.415 json.nvme_error_information_log.unread = 48; 00:19:34.415 json.nvme_ieee_oui_identifier = 6083300; 00:19:34.415 json.nvme_number_of_namespaces = 128; 00:19:34.415 json.nvme_pci_vendor = {}; 00:19:34.415 json.nvme_pci_vendor.id = 32902; 00:19:34.415 json.nvme_pci_vendor.subsystem_id = 32902; 00:19:34.415 json.nvme_smart_health_information_log = {}; 00:19:34.415 json.nvme_smart_health_information_log.available_spare = 100; 00:19:34.415 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:19:34.415 json.nvme_smart_health_information_log.controller_busy_time = 604; 00:19:34.415 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:19:34.415 json.nvme_smart_health_information_log.critical_warning = 0; 00:19:34.415 json.nvme_smart_health_information_log.data_units_read = 103469212; 00:19:34.415 json.nvme_smart_health_information_log.data_units_written = 227640082; 00:19:34.415 json.nvme_smart_health_information_log.host_reads = 6942810085; 00:19:34.415 json.nvme_smart_health_information_log.host_writes = 8109758874; 00:19:34.415 json.nvme_smart_health_information_log.media_errors = 0; 00:19:34.415 json.nvme_smart_health_information_log.num_err_log_entries = 7941; 00:19:34.415 json.nvme_smart_health_information_log.percentage_used = 6; 00:19:34.415 json.nvme_smart_health_information_log.power_cycles = 97; 00:19:34.415 json.nvme_smart_health_information_log.power_on_hours = 39056; 00:19:34.415 json.nvme_smart_health_information_log.temperature = 36; 00:19:34.415 
json.nvme_smart_health_information_log.unsafe_shutdowns = 77; 00:19:34.415 json.nvme_smart_health_information_log.warning_temp_time = 474; 00:19:34.415 json.nvme_total_capacity = 4000787030016; 00:19:34.415 json.nvme_unallocated_capacity = 0; 00:19:34.415 json.nvme_version = {}; 00:19:34.415 json.nvme_version.string = "1.2"; 00:19:34.415 json.nvme_version.value = 66048; 00:19:34.415 json.power_cycle_count = 97; 00:19:34.415 json.power_on_time = {}; 00:19:34.415 json.power_on_time.hours = 39056; 00:19:34.415 json.serial_number = "BTLJ8234018V4P0DGN"; 00:19:34.415 json.smartctl = {}; 00:19:34.415 json.smartctl.argv = []; 00:19:34.415 json.smartctl.argv[0] = "smartctl"; 00:19:34.415 json.smartctl.argv[1] = "-d"; 00:19:34.415 json.smartctl.argv[2] = "nvme"; 00:19:34.415 json.smartctl.argv[3] = "--json=g"; 00:19:34.415 json.smartctl.argv[4] = "-a"; 00:19:34.415 json.smartctl.build_info = "(local build)"; 00:19:34.415 json.smartctl.exit_status = 0; 00:19:34.415 json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64"; 00:19:34.415 json.smartctl.pre_release = false; 00:19:34.415 json.smartctl.svn_revision = "5530"; 00:19:34.415 json.smartctl.version = []; 00:19:34.415 json.smartctl.version[0] = 7; 00:19:34.415 json.smartctl.version[1] = 4; 00:19:34.415 json.smart_status = {}; 00:19:34.415 json.smart_status.nvme = {}; 00:19:34.415 json.smart_status.nvme.value = 0; 00:19:34.415 json.smart_status.passed = true; 00:19:34.415 json.smart_support = {}; 00:19:34.415 json.smart_support.available = true; 00:19:34.415 json.smart_support.enabled = true; 00:19:34.415 json.temperature = {}; 00:19:34.415 json.temperature.current = 36;' 00:19:34.415 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@51 -- # diff '--changed-group-format=%<' --unchanged-group-format= /dev/fd/62 /dev/fd/61 00:19:34.415 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {}; 00:19:34.415 json.device = {}; 00:19:34.415 json.device.protocol = "NVMe"; 00:19:34.415 json.device.type = "nvme"; 00:19:34.415 json.firmware_version = "VDV1Y295"; 00:19:34.415 json.json_format_version = []; 00:19:34.415 json.json_format_version[0] = 1; 00:19:34.415 json.json_format_version[1] = 0; 00:19:34.415 json.local_time = {}; 00:19:34.415 json.local_time.asctime = "Thu Dec 5 13:50:49 2024 CET"; 00:19:34.415 json.local_time.time_t = 1733403049; 00:19:34.415 json.model_name = "INTEL SSDPE2KX040T8"; 00:19:34.415 json.nvme_controller_id = 0; 00:19:34.415 json.nvme_error_information_log = {}; 00:19:34.415 json.nvme_error_information_log.read = 16; 00:19:34.415 json.nvme_error_information_log.size = 64; 00:19:34.415 json.nvme_error_information_log.table = []; 00:19:34.415 json.nvme_error_information_log.table[0] = {}; 00:19:34.415 json.nvme_error_information_log.table[0].error_count = 7941; 00:19:34.415 json.nvme_error_information_log.table[0].lba = {}; 00:19:34.415 json.nvme_error_information_log.table[0].lba.value = 0; 00:19:34.415 json.nvme_error_information_log.table[0].phase_tag = false; 00:19:34.415 json.nvme_error_information_log.table[0].status_field = {}; 00:19:34.415 json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:19:34.415 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:19:34.415 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:19:34.415 json.nvme_error_information_log.table[0].status_field.value = 24582; 
00:19:34.415 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:19:34.415 json.nvme_error_information_log.table[1] = {}; 00:19:34.415 json.nvme_error_information_log.table[10] = {}; 00:19:34.415 json.nvme_error_information_log.table[10].error_count = 7931; 00:19:34.415 json.nvme_error_information_log.table[10].lba = {}; 00:19:34.415 json.nvme_error_information_log.table[10].lba.value = 0; 00:19:34.415 json.nvme_error_information_log.table[10].phase_tag = false; 00:19:34.415 json.nvme_error_information_log.table[10].status_field = {}; 00:19:34.415 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:19:34.415 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:19:34.415 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:19:34.415 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:19:34.415 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:19:34.415 json.nvme_error_information_log.table[11] = {}; 00:19:34.415 json.nvme_error_information_log.table[11].error_count = 7930; 00:19:34.415 json.nvme_error_information_log.table[11].lba = {}; 00:19:34.415 json.nvme_error_information_log.table[11].lba.value = 0; 00:19:34.415 json.nvme_error_information_log.table[11].phase_tag = false; 00:19:34.415 json.nvme_error_information_log.table[11].status_field = {}; 00:19:34.415 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:19:34.415 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:19:34.415 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:19:34.415 json.nvme_error_information_log.table[11].status_field.string = "Internal Error"; 00:19:34.415 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:19:34.415 json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:19:34.415 json.nvme_error_information_log.table[12] = {}; 00:19:34.415 json.nvme_error_information_log.table[12].error_count = 7929; 00:19:34.416 json.nvme_error_information_log.table[12].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[12].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[12].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[12].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[13] = {}; 00:19:34.416 json.nvme_error_information_log.table[13].error_count = 7928; 00:19:34.416 json.nvme_error_information_log.table[13].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[13].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[13].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[13].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[13].status_field.status_code = 6; 
00:19:34.416 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[13].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[14] = {}; 00:19:34.416 json.nvme_error_information_log.table[14].error_count = 7927; 00:19:34.416 json.nvme_error_information_log.table[14].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[14].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[14].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[14].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:19:34.416 json.nvme_error_information_log.table[15] = {}; 00:19:34.416 json.nvme_error_information_log.table[15].error_count = 7926; 00:19:34.416 json.nvme_error_information_log.table[15].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[15].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[15].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[15].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[1].error_count = 7940; 00:19:34.416 json.nvme_error_information_log.table[1].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[1].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[1].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[1].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[1].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[2] = {}; 00:19:34.416 json.nvme_error_information_log.table[2].error_count = 7939; 00:19:34.416 json.nvme_error_information_log.table[2].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[2].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[2].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[2].status_field = {}; 00:19:34.416 
json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[2].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:19:34.416 json.nvme_error_information_log.table[3] = {}; 00:19:34.416 json.nvme_error_information_log.table[3].error_count = 7938; 00:19:34.416 json.nvme_error_information_log.table[3].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[3].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[3].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[3].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[4] = {}; 00:19:34.416 json.nvme_error_information_log.table[4].error_count = 7937; 00:19:34.416 json.nvme_error_information_log.table[4].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[4].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[4].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[4].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[5] = {}; 00:19:34.416 json.nvme_error_information_log.table[5].error_count = 7936; 00:19:34.416 json.nvme_error_information_log.table[5].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[5].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[5].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[5].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:19:34.416 json.nvme_error_information_log.table[6] = {}; 00:19:34.416 json.nvme_error_information_log.table[6].error_count = 7935; 00:19:34.416 json.nvme_error_information_log.table[6].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[6].lba.value = 
0; 00:19:34.416 json.nvme_error_information_log.table[6].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[6].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[7] = {}; 00:19:34.416 json.nvme_error_information_log.table[7].error_count = 7934; 00:19:34.416 json.nvme_error_information_log.table[7].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[7].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[7].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[7].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[7].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:19:34.416 json.nvme_error_information_log.table[8] = {}; 00:19:34.416 json.nvme_error_information_log.table[8].error_count = 7933; 00:19:34.416 json.nvme_error_information_log.table[8].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[8].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[8].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[8].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:19:34.416 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:19:34.416 json.nvme_error_information_log.table[9] = {}; 00:19:34.416 json.nvme_error_information_log.table[9].error_count = 7932; 00:19:34.416 json.nvme_error_information_log.table[9].lba = {}; 00:19:34.416 json.nvme_error_information_log.table[9].lba.value = 0; 00:19:34.416 json.nvme_error_information_log.table[9].phase_tag = false; 00:19:34.416 json.nvme_error_information_log.table[9].status_field = {}; 00:19:34.416 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:19:34.416 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:19:34.416 json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:19:34.416 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:19:34.416 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:19:34.417 json.nvme_error_information_log.unread = 48; 00:19:34.417 json.nvme_ieee_oui_identifier = 
6083300; 00:19:34.417 json.nvme_number_of_namespaces = 128; 00:19:34.417 json.nvme_pci_vendor = {}; 00:19:34.417 json.nvme_pci_vendor.id = 32902; 00:19:34.417 json.nvme_pci_vendor.subsystem_id = 32902; 00:19:34.417 json.nvme_smart_health_information_log = {}; 00:19:34.417 json.nvme_smart_health_information_log.available_spare = 100; 00:19:34.417 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:19:34.417 json.nvme_smart_health_information_log.controller_busy_time = 604; 00:19:34.417 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:19:34.417 json.nvme_smart_health_information_log.critical_warning = 0; 00:19:34.417 json.nvme_smart_health_information_log.data_units_read = 103469209; 00:19:34.417 json.nvme_smart_health_information_log.data_units_written = 227640082; 00:19:34.417 json.nvme_smart_health_information_log.host_reads = 6942810030; 00:19:34.417 json.nvme_smart_health_information_log.host_writes = 8109758874; 00:19:34.417 json.nvme_smart_health_information_log.media_errors = 0; 00:19:34.417 json.nvme_smart_health_information_log.num_err_log_entries = 7941; 00:19:34.417 json.nvme_smart_health_information_log.percentage_used = 6; 00:19:34.417 json.nvme_smart_health_information_log.power_cycles = 97; 00:19:34.417 json.nvme_smart_health_information_log.power_on_hours = 39056; 00:19:34.417 json.nvme_smart_health_information_log.temperature = 36; 00:19:34.417 json.nvme_smart_health_information_log.unsafe_shutdowns = 77; 00:19:34.417 json.nvme_smart_health_information_log.warning_temp_time = 474; 00:19:34.417 json.nvme_total_capacity = 4000787030016; 00:19:34.417 json.nvme_unallocated_capacity = 0; 00:19:34.417 json.nvme_version = {}; 00:19:34.417 json.nvme_version.string = "1.2"; 00:19:34.417 json.nvme_version.value = 66048; 00:19:34.417 json.power_cycle_count = 97; 00:19:34.417 json.power_on_time = {}; 00:19:34.417 json.power_on_time.hours = 39056; 00:19:34.417 json.serial_number = "BTLJ8234018V4P0DGN"; 00:19:34.417 json.smartctl = {}; 00:19:34.417 json.smartctl.argv = []; 00:19:34.417 json.smartctl.argv[0] = "smartctl"; 00:19:34.417 json.smartctl.argv[1] = "-d"; 00:19:34.417 json.smartctl.argv[2] = "nvme"; 00:19:34.417 json.smartctl.argv[3] = "--json=g"; 00:19:34.417 json.smartctl.argv[4] = "-a"; 00:19:34.417 json.smartctl.build_info = "(local build)"; 00:19:34.417 json.smartctl.exit_status = 0; 00:19:34.417 json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64"; 00:19:34.417 json.smartctl.pre_release = false; 00:19:34.417 json.smartctl.svn_revision = "5530"; 00:19:34.417 json.smartctl.version = []; 00:19:34.417 json.smartctl.version[0] = 7; 00:19:34.417 json.smartctl.version[1] = 4; 00:19:34.417 json.smart_status = {}; 00:19:34.417 json.smart_status.nvme = {}; 00:19:34.417 json.smart_status.nvme.value = 0; 00:19:34.417 json.smart_status.passed = true; 00:19:34.417 json.smart_support = {}; 00:19:34.417 json.smart_support.available = true; 00:19:34.417 json.smart_support.enabled = true; 00:19:34.417 json.temperature = {}; 00:19:34.417 json.temperature.current = 36;' 00:19:34.417 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {}; 00:19:34.417 json.device = {}; 00:19:34.417 json.device.protocol = "NVMe"; 00:19:34.417 json.device.type = "nvme"; 00:19:34.417 json.firmware_version = "VDV1Y295"; 00:19:34.417 json.json_format_version = []; 00:19:34.417 json.json_format_version[0] = 1; 00:19:34.417 json.json_format_version[1] = 0; 00:19:34.417 json.local_time = {}; 00:19:34.417 json.local_time.asctime = 
"Thu Dec 5 13:51:05 2024 CET"; 00:19:34.417 json.local_time.time_t = 1733403065; 00:19:34.417 json.model_name = "INTEL SSDPE2KX040T8"; 00:19:34.417 json.nvme_controller_id = 0; 00:19:34.417 json.nvme_error_information_log = {}; 00:19:34.417 json.nvme_error_information_log.read = 16; 00:19:34.417 json.nvme_error_information_log.size = 64; 00:19:34.417 json.nvme_error_information_log.table = []; 00:19:34.417 json.nvme_error_information_log.table[0] = {}; 00:19:34.417 json.nvme_error_information_log.table[0].error_count = 7941; 00:19:34.417 json.nvme_error_information_log.table[0].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[0].lba.value = 0; 00:19:34.417 json.nvme_error_information_log.table[0].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[0].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:19:34.417 json.nvme_error_information_log.table[0].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:19:34.417 json.nvme_error_information_log.table[1] = {}; 00:19:34.417 json.nvme_error_information_log.table[10] = {}; 00:19:34.417 json.nvme_error_information_log.table[10].error_count = 7931; 00:19:34.417 json.nvme_error_information_log.table[10].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[10].lba.value = 0; 00:19:34.417 json.nvme_error_information_log.table[10].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[10].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:19:34.417 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:19:34.417 json.nvme_error_information_log.table[11] = {}; 00:19:34.417 json.nvme_error_information_log.table[11].error_count = 7930; 00:19:34.417 json.nvme_error_information_log.table[11].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[11].lba.value = 0; 00:19:34.417 json.nvme_error_information_log.table[11].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[11].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[11].status_field.string = "Internal Error"; 00:19:34.417 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:19:34.417 json.nvme_error_information_log.table[12] = {}; 00:19:34.417 json.nvme_error_information_log.table[12].error_count = 7929; 00:19:34.417 json.nvme_error_information_log.table[12].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[12].lba.value = 0; 00:19:34.417 
json.nvme_error_information_log.table[12].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[12].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:19:34.417 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:19:34.417 json.nvme_error_information_log.table[13] = {}; 00:19:34.417 json.nvme_error_information_log.table[13].error_count = 7928; 00:19:34.417 json.nvme_error_information_log.table[13].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[13].lba.value = 0; 00:19:34.417 json.nvme_error_information_log.table[13].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[13].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[13].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:19:34.417 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[13].submission_queue_id = 2; 00:19:34.417 json.nvme_error_information_log.table[14] = {}; 00:19:34.417 json.nvme_error_information_log.table[14].error_count = 7927; 00:19:34.417 json.nvme_error_information_log.table[14].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[14].lba.value = 0; 00:19:34.417 json.nvme_error_information_log.table[14].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[14].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:19:34.417 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:19:34.417 json.nvme_error_information_log.table[15] = {}; 00:19:34.417 json.nvme_error_information_log.table[15].error_count = 7926; 00:19:34.417 json.nvme_error_information_log.table[15].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[15].lba.value = 0; 00:19:34.417 json.nvme_error_information_log.table[15].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[15].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:19:34.417 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:19:34.417 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:19:34.417 json.nvme_error_information_log.table[1].error_count = 7940; 
00:19:34.417 json.nvme_error_information_log.table[1].lba = {}; 00:19:34.417 json.nvme_error_information_log.table[1].lba.value = 0; 00:19:34.417 json.nvme_error_information_log.table[1].phase_tag = false; 00:19:34.417 json.nvme_error_information_log.table[1].status_field = {}; 00:19:34.417 json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:19:34.417 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:19:34.417 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:19:34.417 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[1].submission_queue_id = 2; 00:19:34.418 json.nvme_error_information_log.table[2] = {}; 00:19:34.418 json.nvme_error_information_log.table[2].error_count = 7939; 00:19:34.418 json.nvme_error_information_log.table[2].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[2].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[2].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[2].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:19:34.418 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[2].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:19:34.418 json.nvme_error_information_log.table[3] = {}; 00:19:34.418 json.nvme_error_information_log.table[3].error_count = 7938; 00:19:34.418 json.nvme_error_information_log.table[3].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[3].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[3].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[3].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:19:34.418 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:19:34.418 json.nvme_error_information_log.table[4] = {}; 00:19:34.418 json.nvme_error_information_log.table[4].error_count = 7937; 00:19:34.418 json.nvme_error_information_log.table[4].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[4].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[4].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[4].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:19:34.418 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:19:34.418 
json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:19:34.418 json.nvme_error_information_log.table[5] = {}; 00:19:34.418 json.nvme_error_information_log.table[5].error_count = 7936; 00:19:34.418 json.nvme_error_information_log.table[5].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[5].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[5].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[5].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:19:34.418 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:19:34.418 json.nvme_error_information_log.table[6] = {}; 00:19:34.418 json.nvme_error_information_log.table[6].error_count = 7935; 00:19:34.418 json.nvme_error_information_log.table[6].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[6].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[6].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[6].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:19:34.418 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:19:34.418 json.nvme_error_information_log.table[7] = {}; 00:19:34.418 json.nvme_error_information_log.table[7].error_count = 7934; 00:19:34.418 json.nvme_error_information_log.table[7].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[7].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[7].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[7].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:19:34.418 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[7].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:19:34.418 json.nvme_error_information_log.table[8] = {}; 00:19:34.418 json.nvme_error_information_log.table[8].error_count = 7933; 00:19:34.418 json.nvme_error_information_log.table[8].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[8].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[8].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[8].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:19:34.418 
json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:19:34.418 json.nvme_error_information_log.table[9] = {}; 00:19:34.418 json.nvme_error_information_log.table[9].error_count = 7932; 00:19:34.418 json.nvme_error_information_log.table[9].lba = {}; 00:19:34.418 json.nvme_error_information_log.table[9].lba.value = 0; 00:19:34.418 json.nvme_error_information_log.table[9].phase_tag = false; 00:19:34.418 json.nvme_error_information_log.table[9].status_field = {}; 00:19:34.418 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:19:34.418 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:19:34.418 json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:19:34.418 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:19:34.418 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:19:34.418 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:19:34.418 json.nvme_error_information_log.unread = 48; 00:19:34.418 json.nvme_ieee_oui_identifier = 6083300; 00:19:34.418 json.nvme_number_of_namespaces = 128; 00:19:34.418 json.nvme_pci_vendor = {}; 00:19:34.418 json.nvme_pci_vendor.id = 32902; 00:19:34.418 json.nvme_pci_vendor.subsystem_id = 32902; 00:19:34.418 json.nvme_smart_health_information_log = {}; 00:19:34.418 json.nvme_smart_health_information_log.available_spare = 100; 00:19:34.418 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:19:34.418 json.nvme_smart_health_information_log.controller_busy_time = 604; 00:19:34.418 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:19:34.418 json.nvme_smart_health_information_log.critical_warning = 0; 00:19:34.418 json.nvme_smart_health_information_log.data_units_read = 103469212; 00:19:34.418 json.nvme_smart_health_information_log.data_units_written = 227640082; 00:19:34.418 json.nvme_smart_health_information_log.host_reads = 6942810085; 00:19:34.419 json.nvme_smart_health_information_log.host_writes = 8109758874; 00:19:34.419 json.nvme_smart_health_information_log.media_errors = 0; 00:19:34.419 json.nvme_smart_health_information_log.num_err_log_entries = 7941; 00:19:34.419 json.nvme_smart_health_information_log.percentage_used = 6; 00:19:34.419 json.nvme_smart_health_information_log.power_cycles = 97; 00:19:34.419 json.nvme_smart_health_information_log.power_on_hours = 39056; 00:19:34.419 json.nvme_smart_health_information_log.temperature = 36; 00:19:34.419 json.nvme_smart_health_information_log.unsafe_shutdowns = 77; 00:19:34.419 json.nvme_smart_health_information_log.warning_temp_time = 474; 00:19:34.419 json.nvme_total_capacity = 4000787030016; 00:19:34.419 json.nvme_unallocated_capacity = 0; 00:19:34.419 json.nvme_version = {}; 00:19:34.419 json.nvme_version.string = "1.2"; 00:19:34.419 json.nvme_version.value = 66048; 00:19:34.419 json.power_cycle_count = 97; 00:19:34.419 json.power_on_time = {}; 00:19:34.419 json.power_on_time.hours = 39056; 00:19:34.419 json.serial_number = "BTLJ8234018V4P0DGN"; 00:19:34.419 json.smartctl = {}; 00:19:34.419 json.smartctl.argv = []; 00:19:34.419 json.smartctl.argv[0] = "smartctl"; 00:19:34.419 json.smartctl.argv[1] = "-d"; 00:19:34.419 json.smartctl.argv[2] = "nvme"; 00:19:34.419 json.smartctl.argv[3] = "--json=g"; 00:19:34.419 json.smartctl.argv[4] = "-a"; 
00:19:34.419 json.smartctl.build_info = "(local build)"; 00:19:34.419 json.smartctl.exit_status = 0; 00:19:34.419 json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64"; 00:19:34.419 json.smartctl.pre_release = false; 00:19:34.419 json.smartctl.svn_revision = "5530"; 00:19:34.419 json.smartctl.version = []; 00:19:34.419 json.smartctl.version[0] = 7; 00:19:34.419 json.smartctl.version[1] = 4; 00:19:34.419 json.smart_status = {}; 00:19:34.419 json.smart_status.nvme = {}; 00:19:34.419 json.smart_status.nvme.value = 0; 00:19:34.419 json.smart_status.passed = true; 00:19:34.419 json.smart_support = {}; 00:19:34.419 json.smart_support.available = true; 00:19:34.419 json.smart_support.enabled = true; 00:19:34.419 json.temperature = {}; 00:19:34.419 json.temperature.current = 36;' 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@51 -- # true 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@51 -- # DIFF_SMART_JSON='json.local_time.asctime = "Thu Dec 5 13:50:49 2024 CET"; 00:19:34.419 json.local_time.time_t = 1733403049; 00:19:34.419 json.nvme_smart_health_information_log.data_units_read = 103469209; 00:19:34.419 json.nvme_smart_health_information_log.host_reads = 6942810030;' 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@54 -- # grep -v 'json\.nvme_smart_health_information_log\.\|json\.local_time\.\|json\.temperature\.\|json\.power_on_time\.hours' 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@54 -- # true 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@54 -- # ERR_SMART_JSON= 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@56 -- # '[' -n '' ']' 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@61 -- # smartctl -d nvme -l error /dev/spdk/nvme0 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@61 -- # CUSE_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.419 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.419 00:19:34.419 === START OF SMART DATA SECTION === 00:19:34.419 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:19:34.419 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:19:34.419 0 7941 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 1 7940 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 2 7939 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 3 7938 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 4 7937 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 5 7936 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 6 7935 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 7 7934 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 8 7933 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 9 7932 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 10 7931 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 11 7930 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 12 7929 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 13 7928 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 14 7927 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 15 7926 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 ... 
(48 entries not read)' 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@62 -- # '[' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.419 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.419 00:19:34.419 === START OF SMART DATA SECTION === 00:19:34.419 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:19:34.419 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:19:34.419 0 7941 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 1 7940 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 2 7939 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 3 7938 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 4 7937 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 5 7936 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 6 7935 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 7 7934 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 8 7933 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 9 7932 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 10 7931 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 11 7930 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 12 7929 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 13 7928 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 14 7927 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 15 7926 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 ... (48 entries not read)' '!=' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.419 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.419 00:19:34.419 === START OF SMART DATA SECTION === 00:19:34.419 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:19:34.419 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:19:34.419 0 7941 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 1 7940 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 2 7939 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 3 7938 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 4 7937 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 5 7936 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 6 7935 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 7 7934 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 8 7933 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 9 7932 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 10 7931 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 11 7930 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 12 7929 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 13 7928 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 14 7927 0 - 0xc00c - 0 - - Internal Error 00:19:34.419 15 7926 2 - 0xc00c - 0 - - Internal Error 00:19:34.419 ... 
(48 entries not read)' ']' 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@68 -- # smartctl -d nvme -i /dev/spdk/nvme0n1 00:19:34.419 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.419 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.419 00:19:34.419 === START OF INFORMATION SECTION === 00:19:34.419 Model Number: INTEL SSDPE2KX040T8 00:19:34.419 Serial Number: BTLJ8234018V4P0DGN 00:19:34.419 Firmware Version: VDV1Y295 00:19:34.419 PCI Vendor/Subsystem ID: 0x8086 00:19:34.419 IEEE OUI Identifier: 0x5cd2e4 00:19:34.419 Total NVM Capacity: 4,000,787,030,016 [4.00 TB] 00:19:34.419 Unallocated NVM Capacity: 0 00:19:34.419 Controller ID: 0 00:19:34.419 NVMe Version: 1.2 00:19:34.419 Number of Namespaces: 128 00:19:34.419 Namespace 1 Size/Capacity: 4,000,787,030,016 [4.00 TB] 00:19:34.419 Namespace 1 Formatted LBA Size: 512 00:19:34.419 Namespace 1 IEEE EUI-64: 000000 000000d914 00:19:34.419 Local Time is: Thu Dec 5 13:51:05 2024 CET 00:19:34.419 00:19:34.419 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@69 -- # smartctl -d nvme -c /dev/spdk/nvme0 00:19:34.419 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.419 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.419 00:19:34.419 === START OF INFORMATION SECTION === 00:19:34.419 Firmware Updates (0x18): 4 Slots, no Reset required 00:19:34.419 Optional Admin Commands (0x000e): Format Frmw_DL NS_Mngmt 00:19:34.419 Optional NVM Commands (0x0006): Wr_Unc DS_Mngmt 00:19:34.419 Log Page Attributes (0x0e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg 00:19:34.419 Maximum Data Transfer Size: 32 Pages 00:19:34.419 Warning Comp. Temp. Threshold: 70 Celsius 00:19:34.419 Critical Comp. Temp. Threshold: 80 Celsius 00:19:34.419 00:19:34.419 Supported Power States 00:19:34.419 St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat 00:19:34.420 0 + 20.00W - - 0 0 0 0 0 0 00:19:34.420 00:19:34.420 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@70 -- # smartctl -d nvme -A /dev/spdk/nvme0 00:19:34.420 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.420 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.420 00:19:34.420 === START OF SMART DATA SECTION === 00:19:34.420 SMART/Health Information (NVMe Log 0x02) 00:19:34.420 Critical Warning: 0x00 00:19:34.420 Temperature: 36 Celsius 00:19:34.420 Available Spare: 100% 00:19:34.420 Available Spare Threshold: 10% 00:19:34.420 Percentage Used: 6% 00:19:34.420 Data Units Read: 103,469,212 [52.9 TB] 00:19:34.420 Data Units Written: 227,640,082 [116 TB] 00:19:34.420 Host Read Commands: 6,942,810,085 00:19:34.420 Host Write Commands: 8,109,758,874 00:19:34.420 Controller Busy Time: 604 00:19:34.420 Power Cycles: 97 00:19:34.420 Power On Hours: 39,056 00:19:34.420 Unsafe Shutdowns: 77 00:19:34.420 Media and Data Integrity Errors: 0 00:19:34.420 Error Information Log Entries: 7,941 00:19:34.420 Warning Comp. Temperature Time: 474 00:19:34.420 Critical Comp. 
Temperature Time: 0 00:19:34.420 00:19:34.420 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@73 -- # smartctl -d nvme -x /dev/spdk/nvme0 00:19:34.420 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.420 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.420 00:19:34.420 === START OF INFORMATION SECTION === 00:19:34.420 Model Number: INTEL SSDPE2KX040T8 00:19:34.420 Serial Number: BTLJ8234018V4P0DGN 00:19:34.420 Firmware Version: VDV1Y295 00:19:34.420 PCI Vendor/Subsystem ID: 0x8086 00:19:34.420 IEEE OUI Identifier: 0x5cd2e4 00:19:34.420 Total NVM Capacity: 4,000,787,030,016 [4.00 TB] 00:19:34.420 Unallocated NVM Capacity: 0 00:19:34.420 Controller ID: 0 00:19:34.420 NVMe Version: 1.2 00:19:34.420 Number of Namespaces: 128 00:19:34.420 Local Time is: Thu Dec 5 13:51:05 2024 CET 00:19:34.420 Firmware Updates (0x18): 4 Slots, no Reset required 00:19:34.420 Optional Admin Commands (0x000e): Format Frmw_DL NS_Mngmt 00:19:34.420 Optional NVM Commands (0x0006): Wr_Unc DS_Mngmt 00:19:34.420 Log Page Attributes (0x0e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg 00:19:34.420 Maximum Data Transfer Size: 32 Pages 00:19:34.420 Warning Comp. Temp. Threshold: 70 Celsius 00:19:34.420 Critical Comp. Temp. Threshold: 80 Celsius 00:19:34.420 00:19:34.420 Supported Power States 00:19:34.420 St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat 00:19:34.420 0 + 20.00W - - 0 0 0 0 0 0 00:19:34.420 00:19:34.420 === START OF SMART DATA SECTION === 00:19:34.420 SMART overall-health self-assessment test result: PASSED 00:19:34.420 00:19:34.420 SMART/Health Information (NVMe Log 0x02) 00:19:34.420 Critical Warning: 0x00 00:19:34.420 Temperature: 36 Celsius 00:19:34.420 Available Spare: 100% 00:19:34.420 Available Spare Threshold: 10% 00:19:34.420 Percentage Used: 6% 00:19:34.420 Data Units Read: 103,469,212 [52.9 TB] 00:19:34.420 Data Units Written: 227,640,082 [116 TB] 00:19:34.420 Host Read Commands: 6,942,810,085 00:19:34.420 Host Write Commands: 8,109,758,874 00:19:34.420 Controller Busy Time: 604 00:19:34.420 Power Cycles: 97 00:19:34.420 Power On Hours: 39,056 00:19:34.420 Unsafe Shutdowns: 77 00:19:34.420 Media and Data Integrity Errors: 0 00:19:34.420 Error Information Log Entries: 7,941 00:19:34.420 Warning Comp. Temperature Time: 474 00:19:34.420 Critical Comp. Temperature Time: 0 00:19:34.420 00:19:34.420 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:19:34.420 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:19:34.420 0 7941 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 1 7940 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 2 7939 0 - 0xc00c - 0 - - Internal Error 00:19:34.420 3 7938 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 4 7937 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 5 7936 0 - 0xc00c - 0 - - Internal Error 00:19:34.420 6 7935 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 7 7934 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 8 7933 0 - 0xc00c - 0 - - Internal Error 00:19:34.420 9 7932 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 10 7931 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 11 7930 0 - 0xc00c - 0 - - Internal Error 00:19:34.420 12 7929 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 13 7928 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 14 7927 0 - 0xc00c - 0 - - Internal Error 00:19:34.420 15 7926 2 - 0xc00c - 0 - - Internal Error 00:19:34.420 ... 
(48 entries not read) 00:19:34.420 00:19:34.420 Self-tests not supported 00:19:34.420 00:19:34.420 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@74 -- # smartctl -d nvme -H /dev/spdk/nvme0 00:19:34.420 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build) 00:19:34.420 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:19:34.420 00:19:34.420 === START OF SMART DATA SECTION === 00:19:34.420 SMART overall-health self-assessment test result: PASSED 00:19:34.420 00:19:34.420 13:51:05 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:35.357 [2024-12-05 13:51:06.844108] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:19:38.642 13:51:09 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@77 -- # sleep 1 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@78 -- # '[' -c /dev/spdk/nvme1 ']' 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@82 -- # trap - SIGINT SIGTERM EXIT 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- cuse/spdk_smartctl_cuse.sh@83 -- # killprocess 3915507 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@954 -- # '[' -z 3915507 ']' 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@958 -- # kill -0 3915507 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@959 -- # uname 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3915507 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3915507' 00:19:39.580 killing process with pid 3915507 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@973 -- # kill 3915507 00:19:39.580 13:51:10 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@978 -- # wait 3915507 00:19:40.150 00:19:40.150 real 0m34.552s 00:19:40.150 user 0m34.029s 00:19:40.150 sys 0m9.275s 00:19:40.150 13:51:11 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.150 13:51:11 nvme_cuse.nvme_smartctl_cuse -- common/autotest_common.sh@10 -- # set +x 00:19:40.150 ************************************ 00:19:40.150 END TEST nvme_smartctl_cuse 00:19:40.150 ************************************ 00:19:40.150 13:51:11 nvme_cuse -- cuse/nvme_cuse.sh@22 -- # run_test nvme_ns_manage_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh 00:19:40.150 13:51:11 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.150 13:51:11 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.150 13:51:11 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:19:40.150 ************************************ 00:19:40.150 START TEST nvme_ns_manage_cuse 00:19:40.150 ************************************ 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh 00:19:40.150 * Looking for test storage... 00:19:40.150 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@344 -- # case "$op" in 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@345 -- # : 1 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@365 -- # decimal 1 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@353 -- # local d=1 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@355 -- # echo 1 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@366 -- # decimal 2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@353 -- # local d=2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@355 -- # echo 2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.150 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@368 -- # return 0 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.151 --rc genhtml_branch_coverage=1 00:19:40.151 --rc genhtml_function_coverage=1 00:19:40.151 --rc genhtml_legend=1 00:19:40.151 --rc geninfo_all_blocks=1 00:19:40.151 --rc geninfo_unexecuted_blocks=1 00:19:40.151 00:19:40.151 ' 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.151 --rc genhtml_branch_coverage=1 00:19:40.151 --rc genhtml_function_coverage=1 00:19:40.151 --rc genhtml_legend=1 00:19:40.151 --rc geninfo_all_blocks=1 00:19:40.151 --rc geninfo_unexecuted_blocks=1 00:19:40.151 00:19:40.151 ' 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.151 --rc genhtml_branch_coverage=1 00:19:40.151 --rc genhtml_function_coverage=1 00:19:40.151 --rc genhtml_legend=1 00:19:40.151 --rc geninfo_all_blocks=1 00:19:40.151 --rc geninfo_unexecuted_blocks=1 00:19:40.151 00:19:40.151 ' 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.151 --rc genhtml_branch_coverage=1 00:19:40.151 --rc genhtml_function_coverage=1 00:19:40.151 --rc genhtml_legend=1 00:19:40.151 --rc geninfo_all_blocks=1 00:19:40.151 --rc geninfo_unexecuted_blocks=1 00:19:40.151 00:19:40.151 ' 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:19:40.151 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@7 -- # readlink -f 
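The xtrace above shows scripts/common.sh deciding which lcov flags to export by splitting the two version strings into numeric components and comparing them element by element (lt 1.15 2). A simplified illustration of that comparison idea, not the actual cmp_versions implementation (the helper name version_lt is made up here):

    version_lt() {  # true if $1 sorts strictly before $2, component-wise
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    # lcov 1.15 predates 2.x, so the legacy --rc lcov_* spellings are selected:
    version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'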
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- paths/export.sh@5 -- # export PATH 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@10 -- # ctrls=() 00:19:40.410 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@10 -- # declare -A ctrls 00:19:40.410 13:51:11 
nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@11 -- # nvmes=() 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@11 -- # declare -A nvmes 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@12 -- # bdfs=() 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@12 -- # declare -A bdfs 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@14 -- # nvme_name= 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:19:40.411 13:51:11 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:19:43.704 Waiting for block devices as requested 00:19:43.704 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:43.704 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:43.704 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:43.704 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:43.704 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:43.704 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:43.704 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:43.704 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:43.970 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:43.970 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:43.970 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:43.970 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:44.228 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:44.228 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:44.228 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:44.487 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:44.487 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@11 -- # scan_nvme_ctrls 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@49 -- # pci=0000:d8:00.0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@50 -- # pci_can_use 0000:d8:00.0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@18 -- # local i 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@21 -- # [[ =~ 0000:d8:00.0 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- scripts/common.sh@27 -- # return 0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@18 -- # shift 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:19:45.428 
13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[vid]=0x8086 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n BTLJ8234018V4P0DGN ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ8234018V4P0DGN "' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ8234018V4P0DGN ' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n INTEL SSDPE2KX040T8 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8 "' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8 ' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n VDV1Y295 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV1Y295"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[fr]=VDV1Y295 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[rab]=0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- 
nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 5cd2e4 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 5 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[mdts]=5 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x10200 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"' 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ver]=0x10200 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x989680 ]] 00:19:45.428 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe4e1c0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x200 ]] 
00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[oaes]=0x200 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ctratt]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[cntrltype]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 
'nvme0[crdt3]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[mec]=1 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[oacs]=0xe 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x18 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[frmw]=0x18 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 
13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[lpa]=0xe 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 63 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[elpe]=63 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 353 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[cctemp]=353 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 
nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.429 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- 
# nvme0[dsto]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 
13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 
'nvme0[pels]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:19:45.430 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[nn]=128 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x6 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[oncs]=0x6 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- 
nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[fna]=0x4 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[vwc]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ocfs]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[sgls]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[subnqn]= 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:19:45.431 
13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- 
active_power:-' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:19:45.431 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@18 -- # shift 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x1d1c0beb0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x1d1c0beb0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x1d1c0beb0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x1d1c0beb0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x1d1c0beb0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x1d1c0beb0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- 
nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="1"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=1 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[flbas]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[mc]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[dpc]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
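The loop traced above is the nvme_get helper from nvme/functions.sh: it runs `nvme id-ns /dev/ng0n1`, splits every "field : value" line on ':', and evals each pair into a bash associative array (ng0n1 here) so later steps can look fields up by name. A minimal, simplified sketch of that pattern; the whitespace stripping and key normalization below are assumptions, the real helper also handles id-ctrl output and a few special cases:

    declare -A ns_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # "nsze   " -> nsze, "lbaf  0" -> lbaf0
        val=${val# }                 # drop the space nvme-cli prints after ':'
        [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)
    echo "nsze=${ns_info[nsze]} ncap=${ns_info[ncap]} flbas=${ns_info[flbas]}"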
00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:19:45.432 
13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="4,000,787,030,016"' 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=4,000,787,030,016 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.432 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[mssrl]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[mcl]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[msrc]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg 
val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 01000000d91400000000000000000000 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="01000000d91400000000000000000000"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[nguid]=01000000d91400000000000000000000 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 000000000000d914 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="000000000000d914"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[eui64]=000000000000d914 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- 
nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@18 -- # shift 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read 
-r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"' 00:19:45.433 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[mc]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 
'nvme0n1[rescap]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- 
nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[mcl]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[msrc]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 
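A quick consistency check on the values captured in this trace: nsze/ncap of 0x1d1c0beb0 blocks, with the in-use LBA format's lbads:9 (2^9 = 512-byte blocks), works out exactly to the reported nvmcap of 4,000,787,030,016 bytes, i.e. a 4 TB namespace:

    printf '%d blocks x %d B = %d B\n' \
        $((0x1d1c0beb0)) $((1 << 9)) $(( 0x1d1c0beb0 * (1 << 9) ))
    # -> 7814037168 blocks x 512 B = 4000787030016 B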
00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.693 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 01000000d91400000000000000000000 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="01000000d91400000000000000000000"' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[nguid]=01000000d91400000000000000000000 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n 000000000000d914 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="000000000000d914"' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[eui64]=000000000000d914 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 
nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # IFS=: 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:d8:00.0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@14 -- # get_nvme_with_ns_management 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@155 -- # local _ctrls 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@157 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@157 -- # get_nvmes_with_ns_management 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@144 -- # (( 1 == 0 )) 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@146 -- # local ctrl 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}" 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@122 -- # local -A bits 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@125 -- # bits["ss/sr"]=1 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@126 -- # bits["fnvme"]=2 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@127 -- # bits["fc/fi"]=4 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@128 -- # bits["nsmgt"]=8 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@129 -- # bits["self-test"]=16 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@130 -- # bits["directives"]=32 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@132 -- # bits["virtmgt"]=128 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@134 -- # bits["getlba"]=512 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@137 -- # bit=nsmgt 00:19:45.694 13:51:16 
nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@138 -- # [[ -n 8 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@75 -- # [[ -n 0xe ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@76 -- # echo 0xe 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] )) 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@148 -- # echo nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@151 -- # return 0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@158 -- # (( 1 > 0 )) 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@159 -- # echo nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@160 -- # return 0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@14 -- # nvme_name=nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@20 -- # nvme_dev=/dev/nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@21 -- # bdf=0000:d8:00.0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@22 -- # nsids=($(get_nvme_nss "$nvme_name")) 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@22 -- # get_nvme_nss nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@94 -- # local ctrl=nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@96 -- # [[ -n nvme0_ns ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@97 -- # local -n _nss=nvme0_ns 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@99 -- # echo 1 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@25 -- # get_nvme_ctrl_feature nvme0 oaes 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oaes 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@75 -- # [[ -n 0x200 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@76 -- # echo 0x200 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@25 -- # oaes=0x200 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@26 -- # aer_ns_change=0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@27 -- # get_nvme_ctrl_feature nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=cntlid 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@75 -- # [[ -n 0 ]] 00:19:45.694 
13:51:16 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@76 -- # echo 0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@27 -- # cntlid=0 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@70 -- # remove_all_namespaces 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:19:45.694 --- 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces' 00:19:45.694 delete all namespaces 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:19:45.694 --- 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}" 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@40 -- # info_print 'removing nsid=1' 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:19:45.694 --- 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1' 00:19:45.694 removing nsid=1 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:19:45.694 --- 00:19:45.694 13:51:16 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0 00:19:45.694 detach-ns: Success, nsid:1 00:19:45.694 13:51:17 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1 00:20:07.631 delete-ns: Success, deleted nsid:1 00:20:07.631 13:51:35 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@72 -- # reset_nvme_if_aer_unsupported /dev/nvme0 00:20:07.631 13:51:35 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]] 00:20:07.631 13:51:35 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1 00:20:07.631 13:51:36 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0 00:20:07.631 13:51:36 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@73 -- # sleep 1 00:20:07.631 13:51:37 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@75 -- # PCI_ALLOWED=0000:d8:00.0 00:20:07.631 13:51:37 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@75 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:20:09.007 0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7 00:20:09.007 0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6 00:20:09.007 0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5 00:20:09.007 0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4 00:20:09.007 0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3 00:20:09.007 0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2 00:20:09.007 0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1 00:20:09.007 0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0 00:20:09.007 0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7 00:20:09.007 0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6 00:20:09.007 0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5 
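Two things happen in the stretch above. First, get_nvmes_with_ns_management keeps nvme0 because its OACS value 0xe has the namespace-management bit set (bits[nsmgt]=8, and (( 0xe & 8 )) is non-zero). Second, remove_all_namespaces detaches and deletes nsid 1 with nvme-cli and, since aer_ns_change=0 here, falls back to a full controller reset so the kernel re-reads the namespace list; setup.sh then begins rebinding only the allowed controller. A simplified sketch of that cleanup sequence, using the device path and controller id from the log (error handling omitted):

    nsids=(1)                        # only namespace reported by the controller
    for nsid in "${nsids[@]}"; do
        nvme detach-ns /dev/nvme0 -n "$nsid" -c 0
        nvme delete-ns /dev/nvme0 -n "$nsid"
    done
    sleep 1                          # aer_ns_change=0, so force a reset instead
    nvme reset /dev/nvme0
    sleep 1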
00:20:09.007 0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4 00:20:09.007 0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3 00:20:09.007 0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2 00:20:09.007 0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1 00:20:09.007 0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0 00:20:13.197 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@78 -- # spdk_tgt_pid=3921644 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@79 -- # trap 'kill -9 ${spdk_tgt_pid}; clean_up; exit 1' SIGINT SIGTERM EXIT 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@81 -- # waitforlisten 3921644 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@835 -- # '[' -z 3921644 ']' 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.455 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.456 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.456 13:51:44 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@10 -- # set +x 00:20:13.456 [2024-12-05 13:51:44.949811] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
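With the disk handed over to vfio-pci, the test starts the SPDK target on two cores and waits for its RPC socket before driving it. A hedged outline of that bring-up using the core mask and PCI filter captured in the log; paths are shortened, and waitforlisten is the autotest helper that polls /var/tmp/spdk.sock until the new process answers:

    PCI_ALLOWED=0000:d8:00.0 ./scripts/setup.sh   # rebind only 0000:d8:00.0 to vfio-pci
    ./build/bin/spdk_tgt -m 0x3 &                 # reactors on cores 0 and 1
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                 # blocks until /var/tmp/spdk.sock is ready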
00:20:13.456 [2024-12-05 13:51:44.949891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921644 ] 00:20:13.714 [2024-12-05 13:51:45.074783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:13.714 [2024-12-05 13:51:45.135168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.714 [2024-12-05 13:51:45.135175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.972 [2024-12-05 13:51:45.343572] 'OCF_Core' volume operations registered 00:20:13.972 [2024-12-05 13:51:45.343614] 'OCF_Cache' volume operations registered 00:20:13.972 [2024-12-05 13:51:45.348062] 'OCF Composite' volume operations registered 00:20:13.972 [2024-12-05 13:51:45.352513] 'SPDK_block_device' volume operations registered 00:20:14.230 13:51:45 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.230 13:51:45 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@868 -- # return 0 00:20:14.230 13:51:45 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:20:17.516 00:20:17.516 13:51:48 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:20:17.516 [2024-12-05 13:51:48.884948] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:20:17.516 [2024-12-05 13:51:48.885002] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:20:17.516 [2024-12-05 13:51:48.885125] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:20:17.516 13:51:48 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@86 -- # ctrlr=/dev/spdk/nvme0 00:20:17.516 13:51:48 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@88 -- # sleep 1 00:20:18.470 13:51:49 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@89 -- # [[ -c /dev/spdk/nvme0 ]] 00:20:18.470 13:51:49 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@94 -- # sleep 1 00:20:19.405 13:51:50 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@96 -- # for nsid in "${nsids[@]}" 00:20:19.405 13:51:50 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@97 -- # info_print 'create ns: nsze=10000 ncap=10000 flbias=0' 00:20:19.405 13:51:50 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:20:19.405 --- 00:20:19.405 13:51:50 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'create ns: nsze=10000 ncap=10000 flbias=0' 00:20:19.405 create ns: nsze=10000 ncap=10000 flbias=0 00:20:19.405 13:51:50 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:20:19.405 --- 00:20:19.405 13:51:50 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@98 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/spdk/nvme0 -s 10000 -c 10000 -f 0 00:20:19.971 create-ns: Success, created nsid:1 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@99 -- # info_print 'attach ns: nsid=1 controller=0' 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 
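This is the core of the CUSE flow being exercised: the PCIe controller is attached as an SPDK bdev, re-exported to the kernel as the character device /dev/spdk/nvme0 via bdev_nvme_cuse_register, and from then on ordinary nvme-cli namespace-management commands are issued against that CUSE node instead of the kernel driver. The sequence traced here and in the lines that follow, roughly:

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0
    ./scripts/rpc.py bdev_nvme_cuse_register -n Nvme0      # creates /dev/spdk/nvme0
    nvme create-ns /dev/spdk/nvme0 -s 10000 -c 10000 -f 0  # nsze=ncap=10000, LBA format 0
    nvme attach-ns /dev/spdk/nvme0 -n 1 -c 0               # attach nsid 1 to controller 0
    nvme reset     /dev/spdk/nvme0                         # /dev/spdk/nvme0n1 appears after this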
00:20:19.971 --- 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'attach ns: nsid=1 controller=0' 00:20:19.971 attach ns: nsid=1 controller=0 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:20:19.971 --- 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@100 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/spdk/nvme0 -n 1 -c 0 00:20:19.971 attach-ns: Success, nsid:1 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@101 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]] 00:20:19.971 13:51:51 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1 00:20:21.344 13:51:52 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0 00:20:21.344 [2024-12-05 13:51:52.495531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:d8:00.0, 0] resetting controller 00:20:21.344 [2024-12-05 13:51:52.496522] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:20:21.344 13:51:52 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@102 -- # sleep 1 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@103 -- # [[ -c /dev/spdk/nvme0n1 ]] 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@104 -- # info_print 'detach ns: nsid=1 controller=0' 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:20:22.279 --- 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'detach ns: nsid=1 controller=0' 00:20:22.279 detach ns: nsid=1 controller=0 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:20:22.279 --- 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@105 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/spdk/nvme0 -n 1 -c 0 00:20:22.279 detach-ns: Success, nsid:1 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@106 -- # info_print 'delete ns: nsid=1' 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:20:22.279 --- 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete ns: nsid=1' 00:20:22.279 delete ns: nsid=1 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:20:22.279 --- 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/spdk/nvme0 -n 1 00:20:22.279 delete-ns: Success, deleted nsid:1 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@108 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]] 00:20:22.279 13:51:53 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1 00:20:23.215 13:51:54 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0 00:20:23.215 [2024-12-05 13:51:54.561418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:d8:00.0, 0] resetting controller 00:20:23.215 
13:51:54 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@109 -- # sleep 1 00:20:24.150 13:51:55 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@110 -- # [[ ! -c /dev/spdk/nvme0n1 ]] 00:20:24.150 13:51:55 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:25.085 [2024-12-05 13:51:56.565370] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:20:28.369 13:51:59 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@120 -- # sleep 1 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@121 -- # [[ ! -c /dev/spdk/nvme0 ]] 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@124 -- # killprocess 3921644 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@954 -- # '[' -z 3921644 ']' 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@958 -- # kill -0 3921644 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@959 -- # uname 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3921644 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3921644' 00:20:29.301 killing process with pid 3921644 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@973 -- # kill 3921644 00:20:29.301 13:52:00 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@978 -- # wait 3921644 00:20:29.868 13:52:01 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@125 -- # clean_up 00:20:29.869 13:52:01 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:20:33.165 Waiting for block devices as requested 00:20:33.165 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:20:33.165 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:20:33.422 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:20:33.422 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:20:33.422 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:20:33.422 0000:80:04.0 (8086 2021): Already using the 
ioatdma driver 00:20:33.422 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:20:38.690 * Events for some block/disk devices (0000:d8:00.0) were not caught, they may be missing 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@48 -- # remove_all_namespaces 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces' 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:20:38.690 --- 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces' 00:20:38.690 delete all namespaces 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:20:38.690 --- 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}" 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@40 -- # info_print 'removing nsid=1' 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:20:38.690 --- 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1' 00:20:38.690 removing nsid=1 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:20:38.690 --- 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0 00:20:38.690 NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002) 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@41 -- # true 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1 00:20:38.690 NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002) 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@42 -- # true 00:20:38.690 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@50 -- # echo 'Restoring /dev/nvme0...' 00:20:38.690 Restoring /dev/nvme0... 
00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@51 -- # for nsid in "${nsids[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@52 -- # get_nvme_ns_feature nvme0 1 ncap 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=ncap 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@87 -- # local -n _ns=nvme0n1 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@90 -- # echo 0x1d1c0beb0 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@52 -- # ncap=0x1d1c0beb0 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@53 -- # get_nvme_ns_feature nvme0 1 nsze 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=nsze 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@87 -- # local -n _ns=nvme0n1 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@90 -- # echo 0x1d1c0beb0 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@53 -- # nsze=0x1d1c0beb0 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@54 -- # get_active_lbaf nvme0 1 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@103 -- # local ctrl=nvme0 ns=1 reg lbaf 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@105 -- # [[ -n nvme0_ns ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@107 -- # local -n _nss=nvme0_ns 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@108 -- # [[ -n nvme0n1 ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@110 -- # local -n _ns=nvme0n1 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ fpi == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nawupf == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nsfeat == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 
00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ endgid == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nawun == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nabspf == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nabo == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nabsn == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nulbaf == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ ncap == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ dpc == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ dps == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nguid == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ noiob == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nacwu == lbaf* ]] 00:20:38.691 13:52:09 
nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ mssrl == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ dlfeat == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nlbaf == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ mc == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nmic == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ nvmsetid == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # continue 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@113 -- # [[ lbaf0 == lbaf* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@114 -- # [[ ms:0 lbads:9 rp:0x2 (in use) == *\i\n\ \u\s\e* ]] 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@115 -- # echo 0 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- nvme/functions.sh@115 -- # return 0 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@54 -- # lbaf=0 00:20:38.691 13:52:09 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/nvme0 -s 0x1d1c0beb0 -c 0x1d1c0beb0 -f 0 00:20:38.691 create-ns: Success, created nsid:1 00:20:38.691 13:52:10 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@56 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/nvme0 -n 1 -c 0 00:20:38.691 attach-ns: Success, nsid:1 00:20:38.691 13:52:10 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@57 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0 00:20:38.691 13:52:10 nvme_cuse.nvme_ns_manage_cuse -- cuse/nvme_ns_manage_cuse.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:20:42.875 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
00:20:42.875 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:42.875 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:45.565 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:20:46.498 00:20:46.498 real 1m6.512s 00:20:46.498 user 0m37.846s 00:20:46.498 sys 0m12.845s 00:20:46.498 13:52:17 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.498 13:52:17 nvme_cuse.nvme_ns_manage_cuse -- common/autotest_common.sh@10 -- # set +x 00:20:46.498 ************************************ 00:20:46.498 END TEST nvme_ns_manage_cuse 00:20:46.498 ************************************ 00:20:46.755 13:52:18 nvme_cuse -- cuse/nvme_cuse.sh@23 -- # rmmod cuse 00:20:46.755 13:52:18 nvme_cuse -- cuse/nvme_cuse.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:20:50.033 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:20:50.033 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:50.599 00:20:50.599 real 2m56.967s 00:20:50.599 user 2m25.715s 00:20:50.599 sys 0m44.754s 00:20:50.599 13:52:21 nvme_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.599 13:52:21 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:20:50.599 ************************************ 00:20:50.599 END TEST nvme_cuse 00:20:50.599 ************************************ 00:20:50.599 13:52:22 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:20:50.599 13:52:22 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:20:50.599 13:52:22 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:20:50.599 13:52:22 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh 00:20:50.599 13:52:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.599 13:52:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.599 13:52:22 -- common/autotest_common.sh@10 -- # set +x 00:20:50.599 ************************************ 00:20:50.599 START TEST nvme_rpc 
00:20:50.599 ************************************ 00:20:50.600 13:52:22 nvme_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh 00:20:50.857 * Looking for test storage... 00:20:50.857 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.857 13:52:22 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.857 --rc genhtml_branch_coverage=1 00:20:50.857 --rc genhtml_function_coverage=1 00:20:50.857 --rc genhtml_legend=1 00:20:50.857 --rc geninfo_all_blocks=1 00:20:50.857 --rc geninfo_unexecuted_blocks=1 00:20:50.857 00:20:50.857 ' 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.857 --rc genhtml_branch_coverage=1 00:20:50.857 --rc genhtml_function_coverage=1 00:20:50.857 --rc genhtml_legend=1 00:20:50.857 --rc geninfo_all_blocks=1 00:20:50.857 --rc geninfo_unexecuted_blocks=1 00:20:50.857 00:20:50.857 ' 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.857 --rc genhtml_branch_coverage=1 00:20:50.857 --rc genhtml_function_coverage=1 00:20:50.857 --rc genhtml_legend=1 00:20:50.857 --rc geninfo_all_blocks=1 00:20:50.857 --rc geninfo_unexecuted_blocks=1 00:20:50.857 00:20:50.857 ' 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.857 --rc genhtml_branch_coverage=1 00:20:50.857 --rc genhtml_function_coverage=1 00:20:50.857 --rc genhtml_legend=1 00:20:50.857 --rc geninfo_all_blocks=1 00:20:50.857 --rc geninfo_unexecuted_blocks=1 00:20:50.857 00:20:50.857 ' 00:20:50.857 13:52:22 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:20:50.857 13:52:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:50.857 13:52:22 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0 00:20:51.114 13:52:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:d8:00.0 00:20:51.114 13:52:22 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=3928560 00:20:51.114 13:52:22 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:20:51.114 13:52:22 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:20:51.114 13:52:22 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 3928560 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 3928560 ']' 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.114 13:52:22 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:51.114 [2024-12-05 13:52:22.448855] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:20:51.114 [2024-12-05 13:52:22.448939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928560 ] 00:20:51.114 [2024-12-05 13:52:22.569399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:51.114 [2024-12-05 13:52:22.626763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.114 [2024-12-05 13:52:22.626770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.371 [2024-12-05 13:52:22.852504] 'OCF_Core' volume operations registered 00:20:51.371 [2024-12-05 13:52:22.852542] 'OCF_Cache' volume operations registered 00:20:51.371 [2024-12-05 13:52:22.856957] 'OCF Composite' volume operations registered 00:20:51.371 [2024-12-05 13:52:22.861383] 'SPDK_block_device' volume operations registered 00:20:51.628 13:52:23 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.628 13:52:23 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:51.628 13:52:23 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:20:54.901 Nvme0n1 00:20:54.901 13:52:26 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:20:54.901 13:52:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:20:54.901 request: 00:20:54.901 { 00:20:54.901 "bdev_name": "Nvme0n1", 00:20:54.901 "filename": "non_existing_file", 00:20:54.901 "method": "bdev_nvme_apply_firmware", 00:20:54.901 "req_id": 1 00:20:54.901 } 00:20:54.902 Got JSON-RPC error response 00:20:54.902 response: 00:20:54.902 { 00:20:54.902 "code": -32603, 00:20:54.902 "message": "open file failed." 
00:20:54.902 } 00:20:54.902 13:52:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:20:54.902 13:52:26 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:20:54.902 13:52:26 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:59.086 13:52:30 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:59.086 13:52:30 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 3928560 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 3928560 ']' 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 3928560 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3928560 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3928560' 00:20:59.086 killing process with pid 3928560 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@973 -- # kill 3928560 00:20:59.086 13:52:30 nvme_rpc -- common/autotest_common.sh@978 -- # wait 3928560 00:20:59.343 00:20:59.343 real 0m8.787s 00:20:59.343 user 0m16.122s 00:20:59.343 sys 0m1.163s 00:20:59.343 13:52:30 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.343 13:52:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.343 ************************************ 00:20:59.343 END TEST nvme_rpc 00:20:59.343 ************************************ 00:20:59.600 13:52:30 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:59.600 13:52:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:59.600 13:52:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.600 13:52:30 -- common/autotest_common.sh@10 -- # set +x 00:20:59.600 ************************************ 00:20:59.600 START TEST nvme_rpc_timeouts 00:20:59.600 ************************************ 00:20:59.600 13:52:30 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:59.600 * Looking for test storage... 
00:20:59.600 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:20:59.600 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.600 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.600 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.600 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:20:59.600 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.601 13:52:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.601 --rc genhtml_branch_coverage=1 00:20:59.601 --rc genhtml_function_coverage=1 00:20:59.601 --rc genhtml_legend=1 00:20:59.601 --rc geninfo_all_blocks=1 00:20:59.601 --rc geninfo_unexecuted_blocks=1 00:20:59.601 00:20:59.601 ' 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:20:59.601 --rc genhtml_branch_coverage=1 00:20:59.601 --rc genhtml_function_coverage=1 00:20:59.601 --rc genhtml_legend=1 00:20:59.601 --rc geninfo_all_blocks=1 00:20:59.601 --rc geninfo_unexecuted_blocks=1 00:20:59.601 00:20:59.601 ' 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.601 --rc genhtml_branch_coverage=1 00:20:59.601 --rc genhtml_function_coverage=1 00:20:59.601 --rc genhtml_legend=1 00:20:59.601 --rc geninfo_all_blocks=1 00:20:59.601 --rc geninfo_unexecuted_blocks=1 00:20:59.601 00:20:59.601 ' 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.601 --rc genhtml_branch_coverage=1 00:20:59.601 --rc genhtml_function_coverage=1 00:20:59.601 --rc genhtml_legend=1 00:20:59.601 --rc geninfo_all_blocks=1 00:20:59.601 --rc geninfo_unexecuted_blocks=1 00:20:59.601 00:20:59.601 ' 00:20:59.601 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:20:59.601 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_3929792 00:20:59.601 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_3929792 00:20:59.601 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=3929827 00:20:59.601 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:20:59.601 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 3929827 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 3929827 ']' 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.601 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:59.601 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:20:59.858 [2024-12-05 13:52:31.167573] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:20:59.858 [2024-12-05 13:52:31.167655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929827 ] 00:20:59.858 [2024-12-05 13:52:31.287648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:59.858 [2024-12-05 13:52:31.345290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.858 [2024-12-05 13:52:31.345296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.116 [2024-12-05 13:52:31.568169] 'OCF_Core' volume operations registered 00:21:00.116 [2024-12-05 13:52:31.568209] 'OCF_Cache' volume operations registered 00:21:00.116 [2024-12-05 13:52:31.572624] 'OCF Composite' volume operations registered 00:21:00.116 [2024-12-05 13:52:31.577120] 'SPDK_block_device' volume operations registered 00:21:00.374 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.374 13:52:31 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:21:00.374 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:21:00.374 Checking default timeout settings: 00:21:00.374 13:52:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config 00:21:00.632 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:21:00.632 Making settings changes with rpc: 00:21:00.632 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:21:00.889 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:21:00.889 Check default vs. modified settings: 00:21:00.889 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config 00:21:01.456 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_3929792 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_3929792 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:21:01.457 Setting action_on_timeout is changed as expected. 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_3929792 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_3929792 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:21:01.457 Setting timeout_us is changed as expected. 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_3929792 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_3929792 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:21:01.457 Setting timeout_admin_us is changed as expected. 
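The comparison above reduces to one JSON-RPC call bracketed by two config dumps: the defaults (timeout_us 0, timeout_admin_us 0, action_on_timeout none) are saved, bdev_nvme_set_options changes them to 12000000 us, 24000000 us and abort, and the second dump is grepped to confirm each value moved. A minimal sketch of the same check against a running SPDK target, assuming rpc.py talks to the default /var/tmp/spdk.sock (the /tmp file names are illustrative):
./scripts/rpc.py save_config > /tmp/settings_default        # defaults: 0 / 0 / none
./scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
./scripts/rpc.py save_config > /tmp/settings_modified       # expected: 12000000 / 24000000 / abort
grep -E 'timeout_us|timeout_admin_us|action_on_timeout' /tmp/settings_default /tmp/settings_modified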
00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_3929792 /tmp/settings_modified_3929792 00:21:01.457 13:52:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 3929827 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 3929827 ']' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 3929827 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3929827 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3929827' 00:21:01.457 killing process with pid 3929827 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 3929827 00:21:01.457 13:52:32 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 3929827 00:21:02.023 13:52:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:21:02.023 RPC TIMEOUT SETTING TEST PASSED. 00:21:02.023 00:21:02.023 real 0m2.494s 00:21:02.023 user 0m4.803s 00:21:02.023 sys 0m0.845s 00:21:02.023 13:52:33 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.023 13:52:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:21:02.023 ************************************ 00:21:02.023 END TEST nvme_rpc_timeouts 00:21:02.023 ************************************ 00:21:02.023 13:52:33 -- spdk/autotest.sh@239 -- # uname -s 00:21:02.023 13:52:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:21:02.023 13:52:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh 00:21:02.023 13:52:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:02.023 13:52:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.023 13:52:33 -- common/autotest_common.sh@10 -- # set +x 00:21:02.023 ************************************ 00:21:02.023 START TEST sw_hotplug 00:21:02.023 ************************************ 00:21:02.023 13:52:33 sw_hotplug -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh 00:21:02.282 * Looking for test storage... 
00:21:02.282 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.282 13:52:33 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:02.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.282 --rc genhtml_branch_coverage=1 00:21:02.282 --rc genhtml_function_coverage=1 00:21:02.282 --rc genhtml_legend=1 00:21:02.282 --rc geninfo_all_blocks=1 00:21:02.282 --rc geninfo_unexecuted_blocks=1 00:21:02.282 00:21:02.282 ' 00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:02.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.282 --rc genhtml_branch_coverage=1 00:21:02.282 --rc genhtml_function_coverage=1 00:21:02.282 --rc genhtml_legend=1 00:21:02.282 --rc geninfo_all_blocks=1 00:21:02.282 --rc geninfo_unexecuted_blocks=1 00:21:02.282 00:21:02.282 ' 
00:21:02.282 13:52:33 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:02.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.282 --rc genhtml_branch_coverage=1 00:21:02.282 --rc genhtml_function_coverage=1 00:21:02.282 --rc genhtml_legend=1 00:21:02.282 --rc geninfo_all_blocks=1 00:21:02.283 --rc geninfo_unexecuted_blocks=1 00:21:02.283 00:21:02.283 ' 00:21:02.283 13:52:33 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:02.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.283 --rc genhtml_branch_coverage=1 00:21:02.283 --rc genhtml_function_coverage=1 00:21:02.283 --rc genhtml_legend=1 00:21:02.283 --rc geninfo_all_blocks=1 00:21:02.283 --rc geninfo_unexecuted_blocks=1 00:21:02.283 00:21:02.283 ' 00:21:02.283 13:52:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:21:05.595 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:05.595 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:06.163 13:52:37 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:21:06.163 13:52:37 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:21:06.163 13:52:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:21:06.163 13:52:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@233 -- # local class 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:21:06.163 
13:52:37 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:21:06.163 13:52:37 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:d8:00.0 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:d8:00.0 ]] 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:d8:00.0 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@328 -- # (( 1 )) 00:21:06.422 13:52:37 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:d8:00.0 00:21:06.422 13:52:37 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:21:06.422 13:52:37 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:21:06.422 13:52:37 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:21:09.707 Waiting for block devices as requested 00:21:09.707 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:09.707 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:09.707 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:09.707 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:09.707 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:09.966 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:09.966 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:09.966 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:10.223 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:10.223 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:10.223 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:10.481 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:10.481 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:10.481 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:10.739 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:10.739 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:10.739 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:21:11.680 13:52:43 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:d8:00.0 00:21:11.680 13:52:43 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:21:14.967 0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7 00:21:14.967 
0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6 00:21:14.967 0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5 00:21:14.967 0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4 00:21:14.967 0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3 00:21:14.967 0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2 00:21:14.967 0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1 00:21:14.967 0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0 00:21:14.967 0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7 00:21:14.967 0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6 00:21:14.967 0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5 00:21:14.967 0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4 00:21:14.967 0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3 00:21:14.967 0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2 00:21:14.967 0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1 00:21:14.967 0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0 00:21:18.277 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:21:19.214 13:52:50 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:21:19.214 13:52:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:22.611 13:52:53 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:21:22.611 13:52:53 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:21:22.611 13:52:53 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=3935967 00:21:22.611 13:52:53 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:21:22.611 13:52:53 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:21:22.611 13:52:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:21:22.611 13:52:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:21:22.611 13:52:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:21:22.611 13:52:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:21:22.611 13:52:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:21:22.611 13:52:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:21:22.611 13:52:54 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:21:22.611 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:21:22.611 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:21:22.611 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:21:22.611 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:21:22.611 13:52:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:21:22.889 Initializing NVMe Controllers 00:21:23.455 Attaching to 0000:d8:00.0 00:21:25.985 Attached to 0000:d8:00.0 00:21:25.985 Initialization complete. Starting I/O... 
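The hotplug run that starts here is driven by SPDK's example application (build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning, matching hotplug_events=3) while sw_hotplug.sh yanks the device out from under it; the 'echo 1', 'echo vfio-pci' and 'echo 0000:d8:00.0' traces suggest surprise removal and re-attach through PCI sysfs, although the exact sysfs paths are not visible in this log. As a sketch only of the classic pattern it resembles, requiring root (the paths are an assumption, not taken from sw_hotplug.sh):
bdf=0000:d8:00.0
echo 1 > /sys/bus/pci/devices/$bdf/remove                   # assumed: surprise-remove the NVMe device
echo 1 > /sys/bus/pci/rescan                                # assumed: let the kernel rediscover it
echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override   # assumed: rebind for SPDK after the rescan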
00:21:25.985 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 128 I/Os completed (+128) 00:21:25.985 00:21:26.552 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 2944 I/Os completed (+2816) 00:21:26.552 00:21:27.486 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 5760 I/Os completed (+2816) 00:21:27.486 00:21:28.859 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 8704 I/Os completed (+2944) 00:21:28.859 00:21:28.859 13:53:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:28.859 13:53:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:28.859 13:53:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:28.859 [2024-12-05 13:53:00.051821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 00:21:28.859 Controller removed: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:28.859 [2024-12-05 13:53:00.051885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 [2024-12-05 13:53:00.051908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 [2024-12-05 13:53:00.051923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 [2024-12-05 13:53:00.051937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 Controller removed: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:28.859 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:28.859 [2024-12-05 13:53:00.053138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 [2024-12-05 13:53:00.053166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 [2024-12-05 13:53:00.053182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 [2024-12-05 13:53:00.053198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.859 [2024-12-05 13:53:00.055280] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:21:28.859 13:53:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:28.860 13:53:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:29.795 00:21:29.795 13:53:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:29.795 13:53:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:21:29.795 13:53:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:21:30.740 00:21:31.673 00:21:32.606 00:21:32.865 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:21:32.865 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:32.865 13:53:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:21:33.802 Attaching to 0000:d8:00.0 00:21:33.802 [2024-12-05 13:53:05.161965] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, error 14 00:21:35.706 Attached to 0000:d8:00.0 00:21:35.706 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 0 I/Os completed (+0) 00:21:35.706 00:21:35.706 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 128 I/Os completed (+128) 00:21:35.706 00:21:35.965 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 256 I/Os completed (+128) 00:21:35.965 00:21:36.532 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 2304 I/Os completed (+2048) 00:21:36.532 00:21:37.908 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 5120 I/Os completed (+2816) 00:21:37.908 00:21:38.844 INTEL 
SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 8064 I/Os completed (+2944) 00:21:38.844 00:21:39.103 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:39.103 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:39.103 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:39.103 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:39.103 [2024-12-05 13:53:10.396101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 00:21:39.103 Controller removed: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:39.103 [2024-12-05 13:53:10.396145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 [2024-12-05 13:53:10.396166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 [2024-12-05 13:53:10.396182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 [2024-12-05 13:53:10.396196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 Controller removed: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:39.103 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:39.103 [2024-12-05 13:53:10.397455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 [2024-12-05 13:53:10.397482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 [2024-12-05 13:53:10.397497] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 [2024-12-05 13:53:10.397512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:39.103 [2024-12-05 13:53:10.399601] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:21:39.103 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:39.103 13:53:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:39.669 00:21:39.926 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:39.926 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:21:39.926 13:53:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:21:40.493 00:21:41.870 00:21:42.806 00:21:43.375 13:53:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:21:43.375 13:53:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:43.375 13:53:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:21:44.313 Attaching to 0000:d8:00.0 00:21:44.313 [2024-12-05 13:53:15.473913] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, error 14 00:21:46.214 Attached to 0000:d8:00.0 00:21:46.214 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 0 I/Os completed (+0) 00:21:46.214 00:21:46.214 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 128 I/Os completed (+128) 00:21:46.214 00:21:46.214 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 256 I/Os completed (+128) 00:21:46.214 00:21:46.782 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 1536 I/Os completed (+1280) 00:21:46.782 00:21:47.717 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 4352 I/Os completed (+2816) 00:21:47.717 00:21:48.655 INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ): 7296 I/Os completed (+2944) 00:21:48.655 00:21:49.222 13:53:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:49.222 13:53:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:49.222 
13:53:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:49.222 13:53:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:49.222 [2024-12-05 13:53:20.675739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 00:21:49.222 Controller removed: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:49.222 [2024-12-05 13:53:20.675777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 [2024-12-05 13:53:20.675799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 [2024-12-05 13:53:20.675814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 [2024-12-05 13:53:20.675829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 Controller removed: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:49.222 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:49.222 [2024-12-05 13:53:20.676984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 [2024-12-05 13:53:20.677008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 [2024-12-05 13:53:20.677024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 [2024-12-05 13:53:20.677038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:49.222 [2024-12-05 13:53:20.679012] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:21:49.222 13:53:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:49.222 13:53:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:49.789 00:21:50.358 13:53:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:50.358 13:53:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:21:50.358 13:53:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:21:50.618 00:21:51.555 00:21:52.933 00:21:53.499 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:21:53.499 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:53.499 13:53:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:21:54.448 Attaching to 0000:d8:00.0 00:21:54.448 [2024-12-05 13:53:25.785796] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, error 14 00:21:56.350 Attached to 0000:d8:00.0 00:21:56.350 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ8234018V4P0DGN ) 00:21:59.734 13:53:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:59.734 13:53:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:59.734 13:53:30 sw_hotplug -- common/autotest_common.sh@719 -- # time=36.98 00:21:59.734 13:53:30 sw_hotplug -- common/autotest_common.sh@720 -- # echo 36.98 00:21:59.734 13:53:30 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:21:59.734 13:53:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=36.98 00:21:59.734 13:53:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 36.98 1 00:21:59.734 remove_attach_helper took 36.98s to complete (handling 1 nvme drive(s)) 13:53:30 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:21:59.992 [2024-12-05 13:53:31.440113] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:21:59.992 
[2024-12-05 13:53:31.440156] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:22:06.554 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 3935967 00:22:06.554 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (3935967) - No such process 00:22:06.554 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 3935967 00:22:06.555 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:22:06.555 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:22:06.555 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:22:06.555 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=3940685 00:22:06.555 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:22:06.555 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 3940685 00:22:06.555 13:53:36 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 3940685 ']' 00:22:06.555 13:53:36 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.555 13:53:36 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.555 13:53:36 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.555 13:53:36 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.555 13:53:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:06.555 13:53:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:22:06.555 [2024-12-05 13:53:37.062819] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
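From here tgt_run_hotplug repeats the same removal and attach cycles against a long-running spdk_tgt rather than the standalone example, with hotplug handled inside the target by the bdev_nvme module. A rough sketch of that control path, assuming the default RPC socket /var/tmp/spdk.sock and approximating the waitforlisten helper with a simple poll (this is not the script body):

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # crude stand-in for waitforlisten: block until the RPC socket appears
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # enable the bdev_nvme hotplug monitor so controllers re-attach on their own
    ./scripts/rpc.py bdev_nvme_set_hotplug -e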
00:22:06.555 [2024-12-05 13:53:37.062894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940685 ] 00:22:06.555 [2024-12-05 13:53:37.184015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.555 [2024-12-05 13:53:37.237522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.555 [2024-12-05 13:53:37.450998] 'OCF_Core' volume operations registered 00:22:06.555 [2024-12-05 13:53:37.451039] 'OCF_Cache' volume operations registered 00:22:06.555 [2024-12-05 13:53:37.455454] 'OCF Composite' volume operations registered 00:22:06.555 [2024-12-05 13:53:37.459912] 'SPDK_block_device' volume operations registered 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:22:06.555 13:53:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:06.555 13:53:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:13.125 [2024-12-05 13:53:43.734951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 
00:22:13.125 [2024-12-05 13:53:43.735066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.125 [2024-12-05 13:53:43.735089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.125 [2024-12-05 13:53:43.735105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.125 [2024-12-05 13:53:43.735127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.125 [2024-12-05 13:53:43.735139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.125 [2024-12-05 13:53:43.735153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.125 [2024-12-05 13:53:43.735168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.125 [2024-12-05 13:53:43.735180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.125 [2024-12-05 13:53:43.735193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.125 [2024-12-05 13:53:43.735209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:13.125 [2024-12-05 13:53:43.735221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.125 [2024-12-05 13:53:43.735234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.125 [2024-12-05 13:53:43.773487] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:13.125 13:53:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.125 13:53:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:13.125 13:53:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:13.125 13:53:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:13.384 13:53:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:13.384 13:53:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:22:13.384 13:53:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:22:16.669 13:53:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:22:16.669 13:53:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:16.669 13:53:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:18.045 [2024-12-05 13:53:49.545909] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, 
error 14 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:23.313 13:53:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.313 13:53:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:23.313 13:53:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:22:23.313 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:23.314 [2024-12-05 13:53:54.157280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 00:22:23.314 [2024-12-05 13:53:54.157392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:23.314 [2024-12-05 13:53:54.157413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.314 [2024-12-05 13:53:54.157430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.314 [2024-12-05 13:53:54.157451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:23.314 [2024-12-05 13:53:54.157463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.314 [2024-12-05 13:53:54.157477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.314 [2024-12-05 13:53:54.157491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:23.314 [2024-12-05 13:53:54.157503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.314 [2024-12-05 13:53:54.157518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.314 [2024-12-05 13:53:54.157532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:23.314 [2024-12-05 13:53:54.157544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.314 [2024-12-05 13:53:54.157557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.314 [2024-12-05 13:53:54.193214] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:23.314 13:53:54 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:23.314 13:53:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.314 13:53:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:23.314 13:53:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:23.314 13:53:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:23.905 13:53:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:23.905 13:53:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:22:23.905 13:53:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:22:27.191 13:53:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:22:27.191 13:53:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:27.191 13:53:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:28.127 [2024-12-05 13:53:59.369951] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, error 14 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:33.396 13:54:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.396 13:54:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:33.396 13:54:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:33.396 [2024-12-05 13:54:04.683050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 
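With use_bdev=true each cycle is verified through the target rather than sysfs alone: the bdev_bdfs helper lists the registered NVMe bdevs and reduces them to their PCI addresses, which are then compared against the expected 0000:d8:00.0. The helper shown in the trace amounts to this pipeline (the jq filter is copied from the trace; paths are repo-relative):

    ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u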
00:22:33.396 [2024-12-05 13:54:04.683171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.396 [2024-12-05 13:54:04.683194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.396 [2024-12-05 13:54:04.683211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.396 [2024-12-05 13:54:04.683232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.396 [2024-12-05 13:54:04.683245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.396 [2024-12-05 13:54:04.683258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.396 [2024-12-05 13:54:04.683273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.396 [2024-12-05 13:54:04.683285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.396 [2024-12-05 13:54:04.683299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.396 [2024-12-05 13:54:04.683313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.396 [2024-12-05 13:54:04.683325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.396 [2024-12-05 13:54:04.683339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.396 [2024-12-05 13:54:04.721077] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:33.396 13:54:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.396 13:54:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:33.396 13:54:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:33.396 13:54:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:34.329 13:54:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:34.329 13:54:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:22:34.329 13:54:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:22:37.631 13:54:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:22:37.631 13:54:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:37.631 13:54:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:22:38.566 [2024-12-05 13:54:09.889945] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, 
error 14 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=37.48 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 37.48 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=37.48 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 37.48 1 00:22:43.834 remove_attach_helper took 37.48s to complete (handling 1 nvme drive(s)) 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:22:43.834 13:54:15 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:43.834 13:54:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@40 
-- # echo 1 00:22:50.398 [2024-12-05 13:54:21.270846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 00:22:50.398 [2024-12-05 13:54:21.270956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:50.398 [2024-12-05 13:54:21.270977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.398 [2024-12-05 13:54:21.270999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.398 [2024-12-05 13:54:21.271021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:50.398 [2024-12-05 13:54:21.271033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.398 [2024-12-05 13:54:21.271047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.398 [2024-12-05 13:54:21.271061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:50.398 [2024-12-05 13:54:21.271074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.398 [2024-12-05 13:54:21.271087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.398 [2024-12-05 13:54:21.271101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:50.398 [2024-12-05 13:54:21.271113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.398 [2024-12-05 13:54:21.271126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.398 [2024-12-05 13:54:21.308268] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:50.398 13:54:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.398 13:54:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:50.398 13:54:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:50.398 13:54:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:50.964 13:54:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:50.964 13:54:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:22:50.964 13:54:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:22:54.425 13:54:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:22:54.425 13:54:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:54.425 13:54:25 sw_hotplug -- nvme/sw_hotplug.sh@66 
-- # sleep 6 00:22:54.991 [2024-12-05 13:54:26.481696] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, error 14 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:00.258 13:54:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.258 13:54:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:00.258 13:54:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:00.258 [2024-12-05 13:54:31.694562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 00:23:00.258 [2024-12-05 13:54:31.694675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:00.258 [2024-12-05 13:54:31.694697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.258 [2024-12-05 13:54:31.694713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.258 [2024-12-05 13:54:31.694734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:00.258 [2024-12-05 13:54:31.694746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.258 [2024-12-05 13:54:31.694760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.258 [2024-12-05 13:54:31.694775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:00.258 [2024-12-05 13:54:31.694787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.258 [2024-12-05 13:54:31.694801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.258 [2024-12-05 13:54:31.694815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:00.258 [2024-12-05 13:54:31.694827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.258 [2024-12-05 13:54:31.694840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.258 [2024-12-05 13:54:31.731160] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:00.258 13:54:31 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:00.258 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:00.259 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:00.259 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:00.259 13:54:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.259 13:54:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:00.259 13:54:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.259 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:00.259 13:54:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:01.635 13:54:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:01.635 13:54:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:23:01.635 13:54:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:23:04.919 13:54:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:23:04.919 13:54:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:04.919 13:54:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:23:05.485 [2024-12-05 13:54:36.905705] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, error 14 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:10.757 13:54:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.757 13:54:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:10.757 13:54:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:10.757 [2024-12-05 13:54:42.218805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:d8:00.0, 0] in failed state. 
00:23:10.757 [2024-12-05 13:54:42.218916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:10.757 [2024-12-05 13:54:42.218939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.757 [2024-12-05 13:54:42.218955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.757 [2024-12-05 13:54:42.218977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:10.757 [2024-12-05 13:54:42.218989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.757 [2024-12-05 13:54:42.219004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.757 [2024-12-05 13:54:42.219018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:10.757 [2024-12-05 13:54:42.219030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.757 [2024-12-05 13:54:42.219044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.757 [2024-12-05 13:54:42.219058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:10.757 [2024-12-05 13:54:42.219070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.757 [2024-12-05 13:54:42.219083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.757 [2024-12-05 13:54:42.256785] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:10.757 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:10.757 13:54:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.757 13:54:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:10.757 13:54:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.017 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:11.017 13:54:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:11.974 13:54:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:11.974 13:54:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo vfio-pci 00:23:11.974 13:54:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:d8:00.0 00:23:15.437 13:54:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:d8:00.0 00:23:15.437 13:54:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:15.437 13:54:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:23:16.004 [2024-12-05 13:54:47.425692] memory.c:1451:vtophys_pci_device_added: *ERROR*: Cannot update DMA mapping, 
error 14 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=37.46 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 37.46 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=37.46 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 37.46 1 00:23:21.298 remove_attach_helper took 37.46s to complete (handling 1 nvme drive(s)) 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:23:21.298 13:54:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 3940685 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 3940685 ']' 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 3940685 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3940685 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3940685' 00:23:21.298 killing process with pid 3940685 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@973 -- # kill 3940685 00:23:21.298 13:54:52 sw_hotplug -- common/autotest_common.sh@978 -- # wait 3940685 00:23:25.485 [2024-12-05 13:54:56.555843] memory.c:1504:vtophys_pci_device_removed: *ERROR*: Cannot unmap DMA memory, error 22 00:23:25.485 13:54:56 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:23:28.775 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 
00:23:28.775 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:28.775 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:29.342 00:23:29.342 real 2m27.321s 00:23:29.342 user 1m25.989s 00:23:29.342 sys 0m52.415s 00:23:29.342 13:55:00 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.342 13:55:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:29.342 ************************************ 00:23:29.342 END TEST sw_hotplug 00:23:29.342 ************************************ 00:23:29.342 13:55:00 -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]] 00:23:29.342 13:55:00 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:23:29.342 13:55:00 -- spdk/autotest.sh@252 -- # run_test nvme_interrupt /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/interrupt.sh 00:23:29.342 13:55:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:29.342 13:55:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.342 13:55:00 -- common/autotest_common.sh@10 -- # set +x 00:23:29.601 ************************************ 00:23:29.601 START TEST nvme_interrupt 00:23:29.601 ************************************ 00:23:29.602 13:55:00 nvme_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/interrupt.sh 00:23:29.602 * Looking for test storage... 00:23:29.602 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@345 -- # : 1 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@365 -- # decimal 1 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@353 -- # local d=1 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@355 -- # echo 1 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@366 -- # decimal 2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@353 -- # local d=2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@355 -- # echo 2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.602 13:55:01 nvme_interrupt -- scripts/common.sh@368 -- # return 0 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:29.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.602 --rc genhtml_branch_coverage=1 00:23:29.602 --rc genhtml_function_coverage=1 00:23:29.602 --rc genhtml_legend=1 00:23:29.602 --rc geninfo_all_blocks=1 00:23:29.602 --rc geninfo_unexecuted_blocks=1 00:23:29.602 00:23:29.602 ' 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:29.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.602 --rc genhtml_branch_coverage=1 00:23:29.602 --rc genhtml_function_coverage=1 00:23:29.602 --rc genhtml_legend=1 00:23:29.602 --rc geninfo_all_blocks=1 00:23:29.602 --rc geninfo_unexecuted_blocks=1 00:23:29.602 00:23:29.602 ' 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:29.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.602 --rc genhtml_branch_coverage=1 00:23:29.602 --rc genhtml_function_coverage=1 00:23:29.602 --rc genhtml_legend=1 00:23:29.602 --rc geninfo_all_blocks=1 00:23:29.602 --rc geninfo_unexecuted_blocks=1 00:23:29.602 00:23:29.602 ' 00:23:29.602 13:55:01 nvme_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:29.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.602 --rc genhtml_branch_coverage=1 00:23:29.602 --rc genhtml_function_coverage=1 00:23:29.602 --rc genhtml_legend=1 00:23:29.602 --rc geninfo_all_blocks=1 00:23:29.602 --rc geninfo_unexecuted_blocks=1 00:23:29.602 00:23:29.602 ' 00:23:29.602 13:55:01 nvme_interrupt -- nvme/interrupt.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/common.sh@11 -- # declare 
plugin=scheduler_plugin 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/cgroups.sh@244 -- # check_cgroup 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/cgroups.sh@10 -- # echo 2 00:23:29.602 13:55:01 nvme_interrupt -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:23:29.602 13:55:01 nvme_interrupt -- nvme/interrupt.sh@11 -- # CPU_UTIL_INTR_THRESHOLD=10 00:23:29.602 13:55:01 nvme_interrupt -- nvme/interrupt.sh@12 -- # CPU_UTIL_POLL_THRESHOLD=95 00:23:29.602 13:55:01 nvme_interrupt -- nvme/interrupt.sh@19 -- # nvmes=($(nvme_in_userspace)) 00:23:29.861 13:55:01 nvme_interrupt -- nvme/interrupt.sh@19 -- # nvme_in_userspace 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@312 -- # local bdf bdfs 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@313 -- # local nvmes 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@298 -- # local bdf= 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@233 -- # local class 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@234 -- # local subclass 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@235 -- # local progif 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@236 -- # printf %02x 1 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@236 -- # class=01 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@237 -- # printf %02x 8 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@237 -- # subclass=08 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@238 -- # printf %02x 2 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@238 -- # progif=02 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@240 -- # hash lspci 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@245 -- # tr -d '"' 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@301 -- # pci_can_use 0000:d8:00.0 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@18 -- # local i 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@21 -- # [[ =~ 0000:d8:00.0 ]] 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@27 -- # return 0 00:23:29.861 13:55:01 nvme_interrupt -- 
scripts/common.sh@302 -- # echo 0000:d8:00.0 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@323 -- # uname -s 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@328 -- # (( 1 )) 00:23:29.861 13:55:01 nvme_interrupt -- scripts/common.sh@329 -- # printf '%s\n' 0000:d8:00.0 00:23:29.861 13:55:01 nvme_interrupt -- nvme/interrupt.sh@20 -- # nvme=0000:d8:00.0 00:23:29.861 13:55:01 nvme_interrupt -- nvme/interrupt.sh@23 -- # [[ -e /sys/bus/pci/drivers/vfio-pci/0000:d8:00.0/vfio-dev ]] 00:23:29.861 13:55:01 nvme_interrupt -- nvme/interrupt.sh@84 -- # run_test nvme_pcie_intr_mode nvme_pcie_intr_mode 00:23:29.861 13:55:01 nvme_interrupt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:29.861 13:55:01 nvme_interrupt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.861 13:55:01 nvme_interrupt -- common/autotest_common.sh@10 -- # set +x 00:23:29.861 ************************************ 00:23:29.861 START TEST nvme_pcie_intr_mode 00:23:29.861 ************************************ 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@1129 -- # nvme_pcie_intr_mode 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@26 -- # local cpu_util_pre cpu_util_post cpu_util_io 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@27 -- # local bdevperfpy_pid 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@29 -- # bdevperf_pid=3951228 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@31 -- # waitforlisten 3951228 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf -z -q 1 -o 262144 -t 10 -w read -m 0x1 --interrupt-mode 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@835 -- # '[' -z 3951228 ']' 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.861 13:55:01 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@10 -- # set +x 00:23:29.861 [2024-12-05 13:55:01.310566] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:29.861 [2024-12-05 13:55:01.311889] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
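nvme_pcie_intr_mode exercises the same controller through bdevperf started with --interrupt-mode, using the command line recorded above. A sketch of the two-step flow, reusing only the paths and RPC arguments that appear in this run (illustrative, not the interrupt.sh body):

    # start bdevperf idle (-z) in interrupt mode: queue depth 1, 256 KiB reads, core 0, 10 s
    ./build/examples/bdevperf -z -q 1 -o 262144 -t 10 -w read -m 0x1 --interrupt-mode &
    bdevperf_pid=$!
    # attach the controller over RPC, then launch the configured workload
    ./scripts/rpc.py bdev_nvme_attach_controller --name Nvme0 --trtype PCIe --traddr 0000:d8:00.0
    ./examples/bdev/bdevperf/bdevperf.py perform_tests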
00:23:29.861 [2024-12-05 13:55:01.311937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951228 ] 00:23:29.861 I/O size of 262144 is greater than zero copy threshold (65536). 00:23:29.861 Zero copy mechanism will not be used. 00:23:30.120 [2024-12-05 13:55:01.425702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.120 [2024-12-05 13:55:01.483601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.379 [2024-12-05 13:55:01.674085] 'OCF_Core' volume operations registered 00:23:30.379 [2024-12-05 13:55:01.674124] 'OCF_Cache' volume operations registered 00:23:30.379 [2024-12-05 13:55:01.678191] 'OCF Composite' volume operations registered 00:23:30.379 [2024-12-05 13:55:01.682303] 'SPDK_block_device' volume operations registered 00:23:30.379 [2024-12-05 13:55:01.683299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:30.947 13:55:02 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.948 13:55:02 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@868 -- # return 0 00:23:30.948 13:55:02 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@32 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:30.948 13:55:02 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@34 -- # bdev_nvme_attach_ctrlr 0000:d8:00.0 00:23:30.948 13:55:02 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@15 -- # rpc_cmd bdev_nvme_attach_controller --name Nvme0 --trtype PCIe --traddr 0000:d8:00.0 00:23:30.948 13:55:02 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.948 13:55:02 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@10 -- # set +x 00:23:34.234 Nvme0n1 00:23:34.234 13:55:05 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.234 13:55:05 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@35 -- # spdk_pid=3951228 00:23:34.234 13:55:05 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@35 -- # get_spdk_proc_time 5 0 00:23:34.234 13:55:05 nvme_interrupt.nvme_pcie_intr_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:23:34.234 13:55:05 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@10 -- # set +x 00:23:38.424 stime samples: 0 1 0 1 00:23:38.424 utime samples: 0 1 0 1 00:23:38.424 13:55:09 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@35 -- # cpu_util_pre=1 00:23:38.424 13:55:09 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@38 -- # bdevperfpy_pid=3952283 00:23:38.424 13:55:09 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@39 -- # sleep 1 00:23:38.424 13:55:09 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:38.424 [2024-12-05 13:55:09.315577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (Nvme0n1) to intr mode from intr mode. 00:23:38.424 I/O size of 262144 is greater than zero copy threshold (65536). 00:23:38.424 Zero copy mechanism will not be used. 00:23:38.424 Running I/O for 10 seconds... 
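(An aside on the measurement in progress: cpu_util_pre above, and the cpu_util_io / cpu_util_post values that follow, come from get_spdk_proc_time in scheduler/common.sh sampling the bdevperf process's CPU time while the workload runs. The sketch below is an illustration of that kind of sampling, not the harness helper itself; it reads utime and stime from /proc/<pid>/stat, whose 14th and 15th fields hold user and system CPU time in clock ticks, assuming the process name contains no spaces so the awk field numbering holds.)

  #!/usr/bin/env bash
  # Illustrative only: report utime/stime tick deltas over one second for a PID.
  pid=${1:-3951228}                       # defaults to the bdevperf PID from this run
  read -r u1 s1 < <(awk '{print $14, $15}' "/proc/${pid}/stat")
  sleep 1
  read -r u2 s2 < <(awk '{print $14, $15}' "/proc/${pid}/stat")
  echo "utime ticks/s: $((u2 - u1))  stime ticks/s: $((s2 - s1))"

(With the usual 100 clock ticks per second reported by getconf CLK_TCK, a delta of 10 ticks over one second is roughly 10% of a core, the order of magnitude the interrupt-mode result is checked against via CPU_UTIL_INTR_THRESHOLD=10.)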
00:23:38.682 13:55:10 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@41 -- # spdk_pid=3951228 00:23:38.682 13:55:10 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@41 -- # get_spdk_proc_time 8 0 00:23:38.682 13:55:10 nvme_interrupt.nvme_pcie_intr_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:23:38.682 13:55:10 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@10 -- # set +x 00:23:39.878 8025.00 IOPS, 2006.25 MiB/s [2024-12-05T12:55:12.341Z] 7872.00 IOPS, 1968.00 MiB/s [2024-12-05T12:55:13.717Z] 7898.00 IOPS, 1974.50 MiB/s [2024-12-05T12:55:14.651Z] 7938.25 IOPS, 1984.56 MiB/s [2024-12-05T12:55:15.585Z] 7893.80 IOPS, 1973.45 MiB/s [2024-12-05T12:55:16.520Z] 7857.50 IOPS, 1964.38 MiB/s [2024-12-05T12:55:17.455Z] 7826.29 IOPS, 1956.57 MiB/s [2024-12-05T12:55:17.455Z] stime samples: 0 10 10 9 9 9 9 00:23:45.929 utime samples: 0 62 55 56 60 61 63 00:23:45.929 13:55:17 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@41 -- # cpu_util_io=69 00:23:45.929 13:55:17 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@42 -- # wait 3952283 00:23:46.865 7812.75 IOPS, 1953.19 MiB/s [2024-12-05T12:55:19.328Z] 7801.33 IOPS, 1950.33 MiB/s [2024-12-05T12:55:19.328Z] 7784.90 IOPS, 1946.22 MiB/s 00:23:47.802 Latency(us) 00:23:47.802 [2024-12-05T12:55:19.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.802 Job: Nvme0n1 (Core Mask 0x1, workload: read, depth: 1, IO size: 262144) 00:23:47.802 Nvme0n1 : 9.98 7801.86 1950.47 0.00 0.00 116.86 103.29 3761.20 00:23:47.802 [2024-12-05T12:55:19.328Z] =================================================================================================================== 00:23:47.802 [2024-12-05T12:55:19.328Z] Total : 7801.86 1950.47 0.00 0.00 116.86 103.29 3761.20 00:23:47.802 { 00:23:47.802 "results": [ 00:23:47.802 { 00:23:47.802 "job": "Nvme0n1", 00:23:47.802 "core_mask": "0x1", 00:23:47.802 "workload": "read", 00:23:47.802 "status": "finished", 00:23:47.802 "queue_depth": 1, 00:23:47.802 "io_size": 262144, 00:23:47.802 "runtime": 9.978514, 00:23:47.802 "iops": 7801.863083020177, 00:23:47.802 "mibps": 1950.4657707550443, 00:23:47.802 "io_failed": 0, 00:23:47.802 "io_timeout": 0, 00:23:47.802 "avg_latency_us": 116.85511370941035, 00:23:47.802 "min_latency_us": 103.2904347826087, 00:23:47.802 "max_latency_us": 3761.1965217391303 00:23:47.802 } 00:23:47.802 ], 00:23:47.802 "core_count": 1 00:23:47.802 } 00:23:48.061 13:55:19 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@43 -- # spdk_pid=3951228 00:23:48.061 13:55:19 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@43 -- # get_spdk_proc_time 5 0 00:23:48.061 13:55:19 nvme_interrupt.nvme_pcie_intr_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:23:48.061 13:55:19 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@10 -- # set +x 00:23:52.251 stime samples: 0 0 0 1 00:23:52.251 utime samples: 0 1 1 1 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@43 -- # cpu_util_post=1 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@46 -- # killprocess 3951228 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@954 -- # '[' -z 3951228 ']' 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@958 -- # kill -0 3951228 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- 
common/autotest_common.sh@959 -- # uname 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951228 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951228' 00:23:52.251 killing process with pid 3951228 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@973 -- # kill 3951228 00:23:52.251 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.251 00:23:52.251 Latency(us) 00:23:52.251 [2024-12-05T12:55:23.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.251 [2024-12-05T12:55:23.777Z] =================================================================================================================== 00:23:52.251 [2024-12-05T12:55:23.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.251 13:55:23 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@978 -- # wait 3951228 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@48 -- # cat 00:23:56.440 pre CPU util: 1 00:23:56.440 IO CPU util: 69 00:23:56.440 post CPU util: 1 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_intr_mode -- nvme/interrupt.sh@59 -- # (( cpu_util_pre < CPU_UTIL_INTR_THRESHOLD && cpu_util_post < CPU_UTIL_INTR_THRESHOLD )) 00:23:56.440 00:23:56.440 real 0m26.217s 00:23:56.440 user 0m7.314s 00:23:56.440 sys 0m1.846s 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_intr_mode -- common/autotest_common.sh@10 -- # set +x 00:23:56.440 ************************************ 00:23:56.440 END TEST nvme_pcie_intr_mode 00:23:56.440 ************************************ 00:23:56.440 13:55:27 nvme_interrupt -- nvme/interrupt.sh@85 -- # run_test nvme_pcie_poll_mode nvme_pcie_poll_mode 00:23:56.440 13:55:27 nvme_interrupt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:56.440 13:55:27 nvme_interrupt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.440 13:55:27 nvme_interrupt -- common/autotest_common.sh@10 -- # set +x 00:23:56.440 ************************************ 00:23:56.440 START TEST nvme_pcie_poll_mode 00:23:56.440 ************************************ 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@1129 -- # nvme_pcie_poll_mode 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@63 -- # local cpu_util 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@65 -- # bdevperf_pid=3954668 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@67 -- # waitforlisten 3954668 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf -z -q 1 -o 262144 -t 10 -w read -m 0x1 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@835 -- # '[' -z 3954668 ']' 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.440 13:55:27 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@10 -- # set +x 00:23:56.440 [2024-12-05 13:55:27.614794] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:23:56.440 [2024-12-05 13:55:27.614870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954668 ] 00:23:56.440 I/O size of 262144 is greater than zero copy threshold (65536). 00:23:56.440 Zero copy mechanism will not be used. 00:23:56.440 [2024-12-05 13:55:27.736821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.440 [2024-12-05 13:55:27.796037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.699 [2024-12-05 13:55:27.984003] 'OCF_Core' volume operations registered 00:23:56.699 [2024-12-05 13:55:27.984040] 'OCF_Cache' volume operations registered 00:23:56.699 [2024-12-05 13:55:27.988444] 'OCF Composite' volume operations registered 00:23:56.699 [2024-12-05 13:55:27.992928] 'SPDK_block_device' volume operations registered 00:23:56.699 13:55:28 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.699 13:55:28 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@868 -- # return 0 00:23:56.699 13:55:28 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@68 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:56.699 13:55:28 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@70 -- # bdev_nvme_attach_ctrlr 0000:d8:00.0 00:23:56.699 13:55:28 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@15 -- # rpc_cmd bdev_nvme_attach_controller --name Nvme0 --trtype PCIe --traddr 0000:d8:00.0 00:23:56.699 13:55:28 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.699 13:55:28 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@10 -- # set +x 00:24:00.012 Nvme0n1 00:24:00.012 13:55:30 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.012 13:55:30 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@72 -- # sleep 1 00:24:00.012 13:55:30 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@71 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:00.012 I/O size of 262144 is greater than zero copy threshold (65536). 00:24:00.012 Zero copy mechanism will not be used. 00:24:00.012 Running I/O for 10 seconds... 
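(The cpu_util figures line up with the raw stime/utime percentages printed by each run. Assuming the first, warm-up sample is discarded and the reported value is the sum of the mean stime and mean utime percentages — an inference from the numbers, not a quote of interrupt.sh — the interrupt-mode cpu_util_io=69 above can be reproduced like this:)

  # stime samples 2..7: 10 10 9 9 9 9; utime samples 2..7: 62 55 56 60 61 63
  echo "10 10 9 9 9 9 62 55 56 60 61 63" | awk '{
    for (i = 1; i <= 6; i++)  s += $i            # stime total
    for (i = 7; i <= 12; i++) u += $i            # utime total
    printf "cpu_util ~= %.0f\n", s/6 + u/6       # prints 69
  }'

(The same arithmetic applied to the poll-mode samples printed below lands at roughly 100, which is the contrast the two thresholds encode: interrupt mode should leave the core nearly idle outside I/O, while poll mode is expected to keep it busy.)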
00:24:00.578 13:55:31 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@74 -- # spdk_pid=3954668 00:24:00.578 13:55:31 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@74 -- # get_spdk_proc_time 8 0 00:24:00.578 13:55:31 nvme_interrupt.nvme_pcie_poll_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:24:00.578 13:55:31 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@10 -- # set +x 00:24:01.511 7863.00 IOPS, 1965.75 MiB/s [2024-12-05T12:55:34.414Z] 7852.50 IOPS, 1963.12 MiB/s [2024-12-05T12:55:34.981Z] 7852.33 IOPS, 1963.08 MiB/s [2024-12-05T12:55:36.359Z] 7849.75 IOPS, 1962.44 MiB/s [2024-12-05T12:55:37.295Z] 7848.60 IOPS, 1962.15 MiB/s [2024-12-05T12:55:38.229Z] 7848.67 IOPS, 1962.17 MiB/s [2024-12-05T12:55:39.163Z] 7848.14 IOPS, 1962.04 MiB/s [2024-12-05T12:55:39.163Z] stime samples: 0 1 0 0 0 0 1 00:24:07.637 utime samples: 0 100 99 100 100 99 99 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@74 -- # cpu_util=100 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@77 -- # killprocess 3954668 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@954 -- # '[' -z 3954668 ']' 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@958 -- # kill -0 3954668 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@959 -- # uname 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3954668 00:24:07.637 7848.75 IOPS, 1962.19 MiB/s [2024-12-05T12:55:39.163Z] 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3954668' 00:24:07.637 killing process with pid 3954668 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@973 -- # kill 3954668 00:24:07.637 13:55:38 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@978 -- # wait 3954668 00:24:07.637 Received shutdown signal, test time was about 8.008165 seconds 00:24:07.637 00:24:07.637 Latency(us) 00:24:07.637 [2024-12-05T12:55:39.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.637 Job: Nvme0n1 (Core Mask 0x1, workload: read, depth: 1, IO size: 262144) 00:24:07.637 Nvme0n1 : 8.01 7846.81 1961.70 0.00 0.00 119.67 117.54 2094.30 00:24:07.637 [2024-12-05T12:55:39.163Z] =================================================================================================================== 00:24:07.637 [2024-12-05T12:55:39.163Z] Total : 7846.81 1961.70 0.00 0.00 119.67 117.54 2094.30 00:24:07.637 { 00:24:07.637 "results": [ 00:24:07.637 { 00:24:07.637 "job": "Nvme0n1", 00:24:07.637 "core_mask": "0x1", 00:24:07.637 "workload": "read", 00:24:07.637 "status": "terminated", 00:24:07.637 "queue_depth": 1, 00:24:07.637 "io_size": 262144, 00:24:07.637 "runtime": 8.007203, 00:24:07.637 "iops": 7846.809928510617, 00:24:07.637 "mibps": 1961.7024821276543, 00:24:07.637 "io_failed": 0, 00:24:07.637 "io_timeout": 0, 00:24:07.637 
"avg_latency_us": 119.67121209206476, 00:24:07.637 "min_latency_us": 117.53739130434782, 00:24:07.637 "max_latency_us": 2094.302608695652 00:24:07.637 } 00:24:07.637 ], 00:24:07.637 "core_count": 1 00:24:07.637 } 00:24:11.827 13:55:42 nvme_interrupt.nvme_pcie_poll_mode -- nvme/interrupt.sh@79 -- # (( cpu_util < CPU_UTIL_POLL_THRESHOLD )) 00:24:11.827 00:24:11.827 real 0m15.433s 00:24:11.827 user 0m14.298s 00:24:11.827 sys 0m0.797s 00:24:11.827 13:55:42 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.827 13:55:42 nvme_interrupt.nvme_pcie_poll_mode -- common/autotest_common.sh@10 -- # set +x 00:24:11.827 ************************************ 00:24:11.827 END TEST nvme_pcie_poll_mode 00:24:11.827 ************************************ 00:24:11.827 00:24:11.827 real 0m42.137s 00:24:11.827 user 0m21.830s 00:24:11.827 sys 0m2.944s 00:24:11.827 13:55:43 nvme_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.827 13:55:43 nvme_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:11.827 ************************************ 00:24:11.827 END TEST nvme_interrupt 00:24:11.827 ************************************ 00:24:11.827 13:55:43 -- spdk/autotest.sh@256 -- # '[' 1 -eq 1 ']' 00:24:11.827 13:55:43 -- spdk/autotest.sh@257 -- # run_test ioat /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh 00:24:11.827 13:55:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:11.827 13:55:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.827 13:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:11.827 ************************************ 00:24:11.827 START TEST ioat 00:24:11.827 ************************************ 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh 00:24:11.827 * Looking for test storage... 00:24:11.827 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1711 -- # lcov --version 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.827 13:55:43 ioat -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.827 13:55:43 ioat -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.827 13:55:43 ioat -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.827 13:55:43 ioat -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.827 13:55:43 ioat -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.827 13:55:43 ioat -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.827 13:55:43 ioat -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.827 13:55:43 ioat -- scripts/common.sh@344 -- # case "$op" in 00:24:11.827 13:55:43 ioat -- scripts/common.sh@345 -- # : 1 00:24:11.827 13:55:43 ioat -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.827 13:55:43 ioat -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.827 13:55:43 ioat -- scripts/common.sh@365 -- # decimal 1 00:24:11.827 13:55:43 ioat -- scripts/common.sh@353 -- # local d=1 00:24:11.827 13:55:43 ioat -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.827 13:55:43 ioat -- scripts/common.sh@355 -- # echo 1 00:24:11.827 13:55:43 ioat -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.827 13:55:43 ioat -- scripts/common.sh@366 -- # decimal 2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@353 -- # local d=2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.827 13:55:43 ioat -- scripts/common.sh@355 -- # echo 2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.827 13:55:43 ioat -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.827 13:55:43 ioat -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.827 13:55:43 ioat -- scripts/common.sh@368 -- # return 0 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.827 --rc genhtml_branch_coverage=1 00:24:11.827 --rc genhtml_function_coverage=1 00:24:11.827 --rc genhtml_legend=1 00:24:11.827 --rc geninfo_all_blocks=1 00:24:11.827 --rc geninfo_unexecuted_blocks=1 00:24:11.827 00:24:11.827 ' 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.827 --rc genhtml_branch_coverage=1 00:24:11.827 --rc genhtml_function_coverage=1 00:24:11.827 --rc genhtml_legend=1 00:24:11.827 --rc geninfo_all_blocks=1 00:24:11.827 --rc geninfo_unexecuted_blocks=1 00:24:11.827 00:24:11.827 ' 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.827 --rc genhtml_branch_coverage=1 00:24:11.827 --rc genhtml_function_coverage=1 00:24:11.827 --rc genhtml_legend=1 00:24:11.827 --rc geninfo_all_blocks=1 00:24:11.827 --rc geninfo_unexecuted_blocks=1 00:24:11.827 00:24:11.827 ' 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:11.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.827 --rc genhtml_branch_coverage=1 00:24:11.827 --rc genhtml_function_coverage=1 00:24:11.827 --rc genhtml_legend=1 00:24:11.827 --rc geninfo_all_blocks=1 00:24:11.827 --rc geninfo_unexecuted_blocks=1 00:24:11.827 00:24:11.827 ' 00:24:11.827 13:55:43 ioat -- ioat/ioat.sh@10 -- # run_test ioat_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1 00:24:11.827 13:55:43 ioat -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:11.828 13:55:43 ioat -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.828 13:55:43 ioat -- common/autotest_common.sh@10 -- # set +x 00:24:11.828 ************************************ 00:24:11.828 START TEST ioat_perf 00:24:11.828 ************************************ 00:24:11.828 13:55:43 ioat.ioat_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1 00:24:13.748 [2024-12-05 13:55:45.017947] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018016] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown! 
00:24:13.748 [2024-12-05 13:55:45.018030] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018041] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018053] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018064] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018075] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018086] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018098] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018109] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018119] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018130] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018140] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018152] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018164] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown! 00:24:13.748 [2024-12-05 13:55:45.018175] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown! 00:24:13.748 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021 00:24:13.748 Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021 00:24:13.748 User configuration: 00:24:13.748 Number of channels: 1 00:24:13.748 Transfer size: 4096 bytes 00:24:13.748 Queue depth: 256 00:24:13.748 Run time: 1 seconds 00:24:13.748 Core mask: 0x1 00:24:13.748 Verify: No 00:24:13.748 00:24:13.748 Associating ioat_channel 0 with core 0 00:24:13.748 Starting thread on core 0 00:24:13.748 Channel_ID Core Transfers Bandwidth Failed 00:24:13.748 ----------------------------------------------------------- 00:24:13.748 0 0 690560/s 2697 MiB/s 0 00:24:13.748 
=========================================================== 00:24:13.748 Total: 690560/s 2697 MiB/s 0 00:24:13.748 00:24:13.748 real 0m1.690s 00:24:13.748 user 0m1.353s 00:24:13.748 sys 0m0.146s 00:24:13.748 13:55:45 ioat.ioat_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.748 13:55:45 ioat.ioat_perf -- common/autotest_common.sh@10 -- # set +x 00:24:13.748 ************************************ 00:24:13.748 END TEST ioat_perf 00:24:13.748 ************************************ 00:24:13.748 13:55:45 ioat -- ioat/ioat.sh@12 -- # run_test ioat_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1 00:24:13.748 13:55:45 ioat -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:13.748 13:55:45 ioat -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.748 13:55:45 ioat -- common/autotest_common.sh@10 -- # set +x 00:24:13.748 ************************************ 00:24:13.748 START TEST ioat_verify 00:24:13.748 ************************************ 00:24:13.748 13:55:45 ioat.ioat_verify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1 00:24:15.652 [2024-12-05 13:55:46.837870] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.837966] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.837980] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.837992] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838002] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838013] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838023] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838034] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838046] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838056] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838067] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838077] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838088] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838100] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838111] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown! 00:24:15.652 [2024-12-05 13:55:46.838122] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown! 
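(Quick sanity check on the ioat_perf summary just above: 690560 copies per second at the configured 4096-byte transfer size matches the reported ~2697 MiB/s.)

  # 690560 transfers/s x 4096 B = 2,828,533,760 B/s; 1 MiB = 1,048,576 B
  echo $(( 690560 * 4096 / 1048576 ))   # prints 2697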
00:24:15.652 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021 00:24:15.652 Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021 00:24:15.652 User configuration: 00:24:15.652 Run time: 1 seconds 00:24:15.652 Core mask: 0x1 00:24:15.652 Queue depth: 32 00:24:15.652 lcore = 0, copy success = 543, copy failed = 0, fill success = 544, fill failed = 0 00:24:15.652 00:24:15.652 real 0m1.749s 00:24:15.652 user 0m1.383s 00:24:15.652 sys 0m0.173s 00:24:15.652 13:55:46 ioat.ioat_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.652 13:55:46 ioat.ioat_verify -- common/autotest_common.sh@10 -- # set +x 00:24:15.652 ************************************ 00:24:15.652 END TEST ioat_verify 00:24:15.652 ************************************ 00:24:15.652 00:24:15.652 real 0m3.759s 00:24:15.652 user 0m2.884s 00:24:15.652 sys 0m0.515s 00:24:15.652 13:55:46 ioat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.652 13:55:46 ioat -- common/autotest_common.sh@10 -- # set +x 00:24:15.652 ************************************ 00:24:15.652 END TEST ioat 00:24:15.652 ************************************ 00:24:15.652 13:55:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:15.652 13:55:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.652 13:55:46 -- common/autotest_common.sh@10 -- # set +x 00:24:15.652 13:55:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:24:15.652 13:55:46 -- spdk/autotest.sh@339 -- # run_test ocf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh 00:24:15.652 13:55:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:15.652 13:55:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.652 13:55:46 -- common/autotest_common.sh@10 -- # set +x 00:24:15.652 ************************************ 00:24:15.652 START TEST ocf 00:24:15.652 ************************************ 
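(The ocf suite starting here leads with fio-modes, which carves Nvme0n1 into eight split bdevs and layers OCF cache bdevs over them in pass-through (pt), write-through (wt) and write-back (wb) modes, then drives fio with ioengine=spdk_bdev; the generated JSON config appears further below. For orientation only, a roughly equivalent stack could be assembled by hand with rpc.py — the test itself feeds the JSON config to fio's spdk_bdev engine via --spdk_json_conf rather than issuing live RPCs:)

  # Illustrative rpc.py equivalent of the modes.conf stack shown later in this log;
  # the split size (101 MiB) and bdev names mirror that JSON config.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0
  ./scripts/rpc.py bdev_split_create Nvme0n1 8 -s 101
  ./scripts/rpc.py bdev_ocf_create PT_Nvme pt Nvme0n1p0 Nvme0n1p1
  ./scripts/rpc.py bdev_ocf_create WT_Nvme wt Nvme0n1p2 Nvme0n1p3
  ./scripts/rpc.py bdev_ocf_create WB_Nvme0 wb Nvme0n1p4 Nvme0n1p5
  ./scripts/rpc.py bdev_ocf_create WB_Nvme1 wb Nvme0n1p6 Nvme0n1p7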
00:24:15.652 13:55:47 ocf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh 00:24:15.652 * Looking for test storage... 00:24:15.652 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf 00:24:15.652 13:55:47 ocf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:15.652 13:55:47 ocf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:15.652 13:55:47 ocf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:15.911 13:55:47 ocf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:15.911 13:55:47 ocf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.911 13:55:47 ocf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.911 13:55:47 ocf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.911 13:55:47 ocf -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.911 13:55:47 ocf -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.911 13:55:47 ocf -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.911 13:55:47 ocf -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.912 13:55:47 ocf -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.912 13:55:47 ocf -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.912 13:55:47 ocf -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.912 13:55:47 ocf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.912 13:55:47 ocf -- scripts/common.sh@344 -- # case "$op" in 00:24:15.912 13:55:47 ocf -- scripts/common.sh@345 -- # : 1 00:24:15.912 13:55:47 ocf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.912 13:55:47 ocf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.912 13:55:47 ocf -- scripts/common.sh@365 -- # decimal 1 00:24:15.912 13:55:47 ocf -- scripts/common.sh@353 -- # local d=1 00:24:15.912 13:55:47 ocf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.912 13:55:47 ocf -- scripts/common.sh@355 -- # echo 1 00:24:15.912 13:55:47 ocf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.912 13:55:47 ocf -- scripts/common.sh@366 -- # decimal 2 00:24:15.912 13:55:47 ocf -- scripts/common.sh@353 -- # local d=2 00:24:15.912 13:55:47 ocf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.912 13:55:47 ocf -- scripts/common.sh@355 -- # echo 2 00:24:15.912 13:55:47 ocf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.912 13:55:47 ocf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.912 13:55:47 ocf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.912 13:55:47 ocf -- scripts/common.sh@368 -- # return 0 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.912 --rc genhtml_branch_coverage=1 00:24:15.912 --rc genhtml_function_coverage=1 00:24:15.912 --rc genhtml_legend=1 00:24:15.912 --rc geninfo_all_blocks=1 00:24:15.912 --rc geninfo_unexecuted_blocks=1 00:24:15.912 00:24:15.912 ' 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.912 --rc genhtml_branch_coverage=1 00:24:15.912 --rc genhtml_function_coverage=1 00:24:15.912 --rc genhtml_legend=1 00:24:15.912 --rc geninfo_all_blocks=1 00:24:15.912 --rc geninfo_unexecuted_blocks=1 00:24:15.912 00:24:15.912 ' 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.912 --rc genhtml_branch_coverage=1 00:24:15.912 --rc genhtml_function_coverage=1 00:24:15.912 --rc genhtml_legend=1 00:24:15.912 --rc geninfo_all_blocks=1 00:24:15.912 --rc geninfo_unexecuted_blocks=1 00:24:15.912 00:24:15.912 ' 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:15.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.912 --rc genhtml_branch_coverage=1 00:24:15.912 --rc genhtml_function_coverage=1 00:24:15.912 --rc genhtml_legend=1 00:24:15.912 --rc geninfo_all_blocks=1 00:24:15.912 --rc geninfo_unexecuted_blocks=1 00:24:15.912 00:24:15.912 ' 00:24:15.912 13:55:47 ocf -- ocf/ocf.sh@11 -- # run_test ocf_fio_modes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.912 13:55:47 ocf -- common/autotest_common.sh@10 -- # set +x 00:24:15.912 ************************************ 00:24:15.912 START TEST ocf_fio_modes 00:24:15.912 ************************************ 00:24:15.912 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh 00:24:15.912 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:15.912 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1711 -- # lcov --version 00:24:15.912 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@344 -- # case "$op" in 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@345 -- # : 1 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@365 -- # decimal 1 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@353 -- # local d=1 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@355 -- # echo 1 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@366 -- # decimal 2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@353 -- # local d=2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@355 -- # echo 2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- scripts/common.sh@368 -- # return 0 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.171 --rc genhtml_branch_coverage=1 00:24:16.171 --rc genhtml_function_coverage=1 00:24:16.171 --rc genhtml_legend=1 00:24:16.171 --rc geninfo_all_blocks=1 00:24:16.171 --rc geninfo_unexecuted_blocks=1 00:24:16.171 00:24:16.171 ' 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.171 --rc genhtml_branch_coverage=1 00:24:16.171 --rc genhtml_function_coverage=1 00:24:16.171 --rc genhtml_legend=1 00:24:16.171 --rc geninfo_all_blocks=1 00:24:16.171 --rc geninfo_unexecuted_blocks=1 00:24:16.171 00:24:16.171 ' 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.171 --rc genhtml_branch_coverage=1 00:24:16.171 --rc genhtml_function_coverage=1 00:24:16.171 --rc genhtml_legend=1 00:24:16.171 --rc geninfo_all_blocks=1 00:24:16.171 --rc geninfo_unexecuted_blocks=1 00:24:16.171 00:24:16.171 ' 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.171 --rc genhtml_branch_coverage=1 00:24:16.171 --rc genhtml_function_coverage=1 00:24:16.171 --rc genhtml_legend=1 00:24:16.171 --rc geninfo_all_blocks=1 00:24:16.171 --rc geninfo_unexecuted_blocks=1 00:24:16.171 00:24:16.171 ' 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- ocf/common.sh@9 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- integrity/fio-modes.sh@20 -- # clear_nvme 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- ocf/common.sh@12 -- # mapfile -t bdf 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- ocf/common.sh@12 -- # get_first_nvme_bdf 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1509 -- # local bdfs 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:24:16.171 13:55:47 
ocf.ocf_fio_modes -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1498 -- # local bdfs 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:16.171 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:16.172 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:24:16.172 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:24:16.172 13:55:47 ocf.ocf_fio_modes -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0 00:24:16.172 13:55:47 ocf.ocf_fio_modes -- ocf/common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:24:19.461 Waiting for block devices as requested 00:24:19.461 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:19.720 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:19.720 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:19.720 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:19.978 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:19.978 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:19.978 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:20.237 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:20.237 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:20.237 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:20.495 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:20.495 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:20.495 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:20.754 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:20.754 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:20.754 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:21.012 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:24:21.945 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@17 -- # get_nvme_name_from_bdf 0000:d8:00.0 00:24:21.945 13:55:53 ocf.ocf_fio_modes -- common/autotest_common.sh@1483 -- # get_block_dev_from_nvme 0000:d8:00.0 00:24:21.945 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@527 -- # local bdf=0000:d8:00.0 block ctrl sub 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@529 -- # for ctrl in /sys/class/nvme/nvme* 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@530 -- # [[ -e /sys/class/nvme/nvme0/address ]] 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@530 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@531 -- # sub='nqn.2014.08.org.nvmexpress:80868086BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 ' 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@531 -- # break 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@534 -- # [[ -n nqn.2014.08.org.nvmexpress:80868086BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 ]] 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@536 -- # for block in /sys/block/nvme* 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@537 -- # [[ -e /sys/block/nvme0n1/hidden ]] 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@537 -- # [[ 0 == 1 ]] 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@538 -- # [[ -e 
/sys/block/nvme0n1/device/subsysnqn ]] 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@538 -- # [[ nqn.2014.08.org.nvmexpress:80868086BTLJ8234018V4P0DGN INTEL SSDPE2KX040T8 == \n\q\n\.\2\0\1\4\.\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\:\8\0\8\6\8\0\8\6\B\T\L\J\8\2\3\4\0\1\8\V\4\P\0\D\G\N\ \ \I\N\T\E\L\ \S\S\D\P\E\2\K\X\0\4\0\T\8\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ]] 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- scripts/common.sh@539 -- # echo nvme0n1 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@17 -- # name=nvme0n1 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@18 -- # lsblk /dev/nvme0n1 --output MOUNTPOINT -n 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@18 -- # wc -w 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@18 -- # mountpoints=0 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@19 -- # '[' 0 '!=' 0 ']' 00:24:21.946 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@22 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1000 oflag=direct 00:24:22.513 1000+0 records in 00:24:22.513 1000+0 records out 00:24:22.513 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.476944 s, 2.2 GB/s 00:24:22.513 13:55:53 ocf.ocf_fio_modes -- ocf/common.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:24:25.803 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:25.803 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:26.063 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:29.352 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:24:30.287 13:56:01 ocf.ocf_fio_modes -- integrity/fio-modes.sh@22 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:30.287 13:56:01 ocf.ocf_fio_modes -- integrity/fio-modes.sh@25 -- # xtrace_disable 00:24:30.287 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@10 -- # set +x 00:24:30.547 { 00:24:30.547 "subsystems": [ 00:24:30.547 { 00:24:30.547 "subsystem": "bdev", 00:24:30.547 "config": [ 00:24:30.547 { 00:24:30.547 "method": "bdev_nvme_attach_controller", 00:24:30.547 "params": { 00:24:30.547 "trtype": "PCIe", 00:24:30.547 "name": "Nvme0", 00:24:30.547 "traddr": "0000:d8:00.0" 00:24:30.547 } 00:24:30.547 }, 00:24:30.547 { 00:24:30.547 "method": "bdev_split_create", 00:24:30.547 "params": { 00:24:30.547 "base_bdev": "Nvme0n1", 00:24:30.547 "split_count": 8, 00:24:30.547 "split_size_mb": 101 00:24:30.547 } 00:24:30.547 }, 00:24:30.547 { 00:24:30.547 "method": "bdev_ocf_create", 00:24:30.547 "params": { 00:24:30.547 "name": "PT_Nvme", 00:24:30.547 "mode": "pt", 00:24:30.547 "cache_bdev_name": "Nvme0n1p0", 00:24:30.547 "core_bdev_name": "Nvme0n1p1" 00:24:30.547 } 00:24:30.547 }, 00:24:30.547 { 00:24:30.547 "method": "bdev_ocf_create", 00:24:30.547 "params": { 00:24:30.547 "name": "WT_Nvme", 00:24:30.547 "mode": "wt", 00:24:30.547 
"cache_bdev_name": "Nvme0n1p2", 00:24:30.547 "core_bdev_name": "Nvme0n1p3" 00:24:30.547 } 00:24:30.547 }, 00:24:30.547 { 00:24:30.547 "method": "bdev_ocf_create", 00:24:30.547 "params": { 00:24:30.547 "name": "WB_Nvme0", 00:24:30.547 "mode": "wb", 00:24:30.547 "cache_bdev_name": "Nvme0n1p4", 00:24:30.547 "core_bdev_name": "Nvme0n1p5" 00:24:30.547 } 00:24:30.547 }, 00:24:30.547 { 00:24:30.547 "method": "bdev_ocf_create", 00:24:30.547 "params": { 00:24:30.547 "name": "WB_Nvme1", 00:24:30.547 "mode": "wb", 00:24:30.547 "cache_bdev_name": "Nvme0n1p6", 00:24:30.547 "core_bdev_name": "Nvme0n1p7" 00:24:30.547 } 00:24:30.547 }, 00:24:30.547 { 00:24:30.547 "method": "bdev_wait_for_examine" 00:24:30.547 } 00:24:30.547 ] 00:24:30.547 } 00:24:30.547 ] 00:24:30.547 } 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- integrity/fio-modes.sh@100 -- # fio_verify --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- integrity/fio-modes.sh@12 -- # fio_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1345 -- # shift 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev 00:24:30.547 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1349 -- # grep libasan 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:30.548 13:56:01 ocf.ocf_fio_modes 
-- common/autotest_common.sh@1349 -- # asan_lib= 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:30.548 13:56:01 ocf.ocf_fio_modes -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:24:30.807 randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:30.807 randrw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:30.807 write: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:30.807 rw: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:30.807 randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:30.807 fio-3.35 00:24:30.807 Starting 5 threads 00:24:45.712 00:24:45.712 randwrite: (groupid=0, jobs=5): err= 0: pid=3961422: Thu Dec 5 13:56:16 2024 00:24:45.712 read: IOPS=21.1k, BW=82.3MiB/s (86.3MB/s)(824MiB/10011msec) 00:24:45.712 slat (usec): min=4, max=403, avg=29.80, stdev=24.79 00:24:45.712 clat (usec): min=55, max=25931, avg=7611.87, stdev=3749.53 00:24:45.712 lat (usec): min=93, max=25957, avg=7641.67, stdev=3749.91 00:24:45.712 clat percentiles (usec): 00:24:45.712 | 1.00th=[ 437], 5.00th=[ 930], 10.00th=[ 1926], 20.00th=[ 4359], 00:24:45.712 | 30.00th=[ 5800], 40.00th=[ 6915], 50.00th=[ 7898], 60.00th=[ 8717], 00:24:45.712 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[12125], 95.00th=[13566], 00:24:45.712 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19792], 99.95th=[20841], 00:24:45.712 | 99.99th=[22676] 00:24:45.713 bw ( KiB/s): min= 4175, max=37888, per=28.16%, avg=23741.01, stdev=3345.19, samples=84 00:24:45.713 iops : min= 1043, max= 9472, avg=5935.21, stdev=836.31, samples=84 00:24:45.713 write: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(679MiB/9981msec); 0 zone resets 00:24:45.713 slat (usec): min=8, max=342, avg=32.96, stdev=22.63 00:24:45.713 clat (usec): min=47, max=87181, avg=9458.54, stdev=8536.56 00:24:45.713 lat (usec): min=68, max=87207, avg=9491.49, stdev=8541.28 00:24:45.713 clat percentiles (usec): 00:24:45.713 | 1.00th=[ 93], 5.00th=[ 127], 10.00th=[ 182], 20.00th=[ 693], 00:24:45.713 | 30.00th=[ 2900], 40.00th=[ 6521], 50.00th=[ 8979], 60.00th=[10945], 00:24:45.713 | 70.00th=[13173], 80.00th=[15401], 90.00th=[19006], 95.00th=[23462], 00:24:45.713 | 99.00th=[39060], 99.50th=[45876], 99.90th=[62129], 99.95th=[67634], 00:24:45.713 | 99.99th=[73925] 00:24:45.713 bw ( KiB/s): min=34311, max=108144, per=99.44%, avg=69281.60, stdev=4176.32, samples=95 00:24:45.713 iops : min= 8577, max=27036, avg=17320.34, stdev=1044.10, samples=95 00:24:45.713 lat (usec) : 50=0.01%, 100=0.78%, 250=6.28%, 500=1.97%, 750=2.18% 00:24:45.713 lat (usec) : 1000=2.12% 00:24:45.713 lat (msec) : 2=4.75%, 4=6.56%, 10=40.99%, 20=30.51%, 50=3.71% 00:24:45.713 lat (msec) : 100=0.15% 00:24:45.713 cpu : usr=99.54%, sys=0.01%, ctx=306, majf=0, minf=558 00:24:45.713 IO depths : 1=5.4%, 2=5.2%, 4=5.2%, 8=7.5%, 16=9.8%, 32=18.4%, >=64=48.5% 
00:24:45.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.713 complete : 0=0.0%, 4=97.5%, 8=0.6%, 16=0.4%, 32=0.6%, 64=0.5%, >=64=0.3% 00:24:45.713 issued rwts: total=210989,173849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.713 00:24:45.713 Run status group 0 (all jobs): 00:24:45.713 READ: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=824MiB (864MB), run=10011-10011msec 00:24:45.713 WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=679MiB (712MB), run=9981-9981msec 00:24:50.993 13:56:21 ocf.ocf_fio_modes -- integrity/fio-modes.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:24:50.993 13:56:21 ocf.ocf_fio_modes -- integrity/fio-modes.sh@103 -- # cleanup 00:24:50.993 13:56:21 ocf.ocf_fio_modes -- integrity/fio-modes.sh@16 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf 00:24:50.993 00:24:50.993 real 0m34.665s 00:24:50.993 user 1m8.890s 00:24:50.993 sys 0m7.925s 00:24:50.993 13:56:21 ocf.ocf_fio_modes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.993 13:56:21 ocf.ocf_fio_modes -- common/autotest_common.sh@10 -- # set +x 00:24:50.993 ************************************ 00:24:50.993 END TEST ocf_fio_modes 00:24:50.993 ************************************ 00:24:50.993 13:56:21 ocf -- ocf/ocf.sh@12 -- # run_test ocf_bdevperf_iotypes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh 00:24:50.993 13:56:21 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:50.993 13:56:21 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.993 13:56:21 ocf -- common/autotest_common.sh@10 -- # set +x 00:24:50.993 ************************************ 00:24:50.993 START TEST ocf_bdevperf_iotypes 00:24:50.993 ************************************ 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1711 -- # lcov --version 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- 
scripts/common.sh@344 -- # case "$op" in 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@345 -- # : 1 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@365 -- # decimal 1 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@353 -- # local d=1 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@355 -- # echo 1 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@366 -- # decimal 2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@353 -- # local d=2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@355 -- # echo 2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- scripts/common.sh@368 -- # return 0 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.993 --rc genhtml_branch_coverage=1 00:24:50.993 --rc genhtml_function_coverage=1 00:24:50.993 --rc genhtml_legend=1 00:24:50.993 --rc geninfo_all_blocks=1 00:24:50.993 --rc geninfo_unexecuted_blocks=1 00:24:50.993 00:24:50.993 ' 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.993 --rc genhtml_branch_coverage=1 00:24:50.993 --rc genhtml_function_coverage=1 00:24:50.993 --rc genhtml_legend=1 00:24:50.993 --rc geninfo_all_blocks=1 00:24:50.993 --rc geninfo_unexecuted_blocks=1 00:24:50.993 00:24:50.993 ' 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.993 --rc genhtml_branch_coverage=1 00:24:50.993 --rc genhtml_function_coverage=1 00:24:50.993 --rc genhtml_legend=1 00:24:50.993 --rc geninfo_all_blocks=1 00:24:50.993 --rc geninfo_unexecuted_blocks=1 00:24:50.993 00:24:50.993 ' 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:50.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.993 --rc genhtml_branch_coverage=1 00:24:50.993 --rc genhtml_function_coverage=1 00:24:50.993 --rc genhtml_legend=1 00:24:50.993 --rc geninfo_all_blocks=1 00:24:50.993 --rc geninfo_unexecuted_blocks=1 00:24:50.993 00:24:50.993 ' 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@12 
-- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@13 -- # gen_malloc_ocf_json 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@2 -- # local size=300 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w flush 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@3 -- # local block_size=512 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@4 -- # local config 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:24:50.993 { 00:24:50.993 "method": "bdev_malloc_create", 00:24:50.993 "params": { 00:24:50.993 "name": "Malloc$malloc", 00:24:50.993 "num_blocks": $(( (size << 20) / block_size )), 00:24:50.993 "block_size": 512 00:24:50.993 } 00:24:50.993 } 00:24:50.993 JSON 00:24:50.993 )") 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:24:50.993 { 00:24:50.993 "method": "bdev_malloc_create", 00:24:50.993 "params": { 00:24:50.993 "name": "Malloc$malloc", 00:24:50.993 "num_blocks": $(( (size << 20) / block_size )), 00:24:50.993 "block_size": 512 00:24:50.993 } 00:24:50.993 } 00:24:50.993 JSON 00:24:50.993 )") 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:24:50.993 { 00:24:50.993 "method": "bdev_malloc_create", 00:24:50.993 "params": { 00:24:50.993 "name": "Malloc$malloc", 00:24:50.993 "num_blocks": $(( (size << 20) / block_size )), 00:24:50.993 "block_size": 512 00:24:50.993 } 00:24:50.993 } 00:24:50.993 JSON 00:24:50.993 )") 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:50.993 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:24:50.994 
13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:24:50.994 { 00:24:50.994 "method": "bdev_ocf_create", 00:24:50.994 "params": { 00:24:50.994 "name": "MalCache$ocf", 00:24:50.994 "mode": "${ocf_mode[ocf]}", 00:24:50.994 "cache_bdev_name": "${ocf_cache[ocf]}", 00:24:50.994 "core_bdev_name": "${ocf_core[ocf]}" 00:24:50.994 } 00:24:50.994 } 00:24:50.994 JSON 00:24:50.994 )") 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # cat 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:24:50.994 { 00:24:50.994 "method": "bdev_ocf_create", 00:24:50.994 "params": { 00:24:50.994 "name": "MalCache$ocf", 00:24:50.994 "mode": "${ocf_mode[ocf]}", 00:24:50.994 "cache_bdev_name": "${ocf_cache[ocf]}", 00:24:50.994 "core_bdev_name": "${ocf_core[ocf]}" 00:24:50.994 } 00:24:50.994 } 00:24:50.994 JSON 00:24:50.994 )") 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # cat 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # jq . 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # IFS=, 00:24:50.994 13:56:22 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:24:50.994 "method": "bdev_malloc_create", 00:24:50.994 "params": { 00:24:50.994 "name": "Malloc0", 00:24:50.994 "num_blocks": 614400, 00:24:50.994 "block_size": 512 00:24:50.994 } 00:24:50.994 },{ 00:24:50.994 "method": "bdev_malloc_create", 00:24:50.994 "params": { 00:24:50.994 "name": "Malloc1", 00:24:50.994 "num_blocks": 614400, 00:24:50.994 "block_size": 512 00:24:50.994 } 00:24:50.994 },{ 00:24:50.994 "method": "bdev_malloc_create", 00:24:50.994 "params": { 00:24:50.994 "name": "Malloc2", 00:24:50.994 "num_blocks": 614400, 00:24:50.994 "block_size": 512 00:24:50.994 } 00:24:50.994 },{ 00:24:50.994 "method": "bdev_ocf_create", 00:24:50.994 "params": { 00:24:50.994 "name": "MalCache1", 00:24:50.994 "mode": "wt", 00:24:50.994 "cache_bdev_name": "Malloc0", 00:24:50.994 "core_bdev_name": "Malloc1" 00:24:50.994 } 00:24:50.994 },{ 00:24:50.994 "method": "bdev_ocf_create", 00:24:50.994 "params": { 00:24:50.994 "name": "MalCache2", 00:24:50.994 "mode": "pt", 00:24:50.994 "cache_bdev_name": "Malloc0", 00:24:50.994 "core_bdev_name": "Malloc2" 00:24:50.994 } 00:24:50.994 }' 00:24:50.994 [2024-12-05 13:56:22.217197] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
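The JSON printed above is the complete bdev configuration assembled by gen_malloc_ocf_json: three 300 MiB malloc bdevs (614400 blocks of 512 B each) plus two OCF bdevs, MalCache1 in write-through (wt) mode and MalCache2 in pass-through (pt) mode, both layered on the same cache device Malloc0. A minimal sketch of how that config reaches bdevperf for this flush pass, assuming the workspace paths of this run; the flags match the traced command line and the process substitution is what produces the /dev/fd/62 seen there:

    bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf
    # gen_malloc_ocf_json prints the config shown above; bdevperf reads it from
    # the /dev/fd/* path created by the process substitution
    "$bdevperf" --json <(gen_malloc_ocf_json) -q 128 -o 4096 -t 4 -w flush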
00:24:50.994 [2024-12-05 13:56:22.217273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3963448 ] 00:24:50.994 [2024-12-05 13:56:22.338605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.994 [2024-12-05 13:56:22.393433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.254 [2024-12-05 13:56:22.606060] 'OCF_Core' volume operations registered 00:24:51.254 [2024-12-05 13:56:22.606094] 'OCF_Cache' volume operations registered 00:24:51.254 [2024-12-05 13:56:22.610521] 'OCF Composite' volume operations registered 00:24:51.254 [2024-12-05 13:56:22.614967] 'SPDK_block_device' volume operations registered 00:24:51.513 [2024-12-05 13:56:22.864643] Inserting cache MalCache1 00:24:51.513 [2024-12-05 13:56:22.865152] MalCache1: Metadata initialized 00:24:51.513 [2024-12-05 13:56:22.865602] MalCache1: Successfully added 00:24:51.513 [2024-12-05 13:56:22.865621] MalCache1: Cache mode : wt 00:24:51.513 [2024-12-05 13:56:22.876444] MalCache1: Super block config offset : 0 kiB 00:24:51.513 [2024-12-05 13:56:22.876465] MalCache1: Super block config size : 2200 B 00:24:51.513 [2024-12-05 13:56:22.876472] MalCache1: Super block runtime offset : 128 kiB 00:24:51.513 [2024-12-05 13:56:22.876479] MalCache1: Super block runtime size : 4 B 00:24:51.513 [2024-12-05 13:56:22.876485] MalCache1: Reserved offset : 256 kiB 00:24:51.513 [2024-12-05 13:56:22.876492] MalCache1: Reserved size : 128 kiB 00:24:51.513 [2024-12-05 13:56:22.876498] MalCache1: Part config offset : 384 kiB 00:24:51.513 [2024-12-05 13:56:22.876505] MalCache1: Part config size : 48 kiB 00:24:51.513 [2024-12-05 13:56:22.876511] MalCache1: Part runtime offset : 640 kiB 00:24:51.513 [2024-12-05 13:56:22.876518] MalCache1: Part runtime size : 72 kiB 00:24:51.513 [2024-12-05 13:56:22.876524] MalCache1: Core config offset : 768 kiB 00:24:51.513 [2024-12-05 13:56:22.876530] MalCache1: Core config size : 512 kiB 00:24:51.513 [2024-12-05 13:56:22.876537] MalCache1: Core runtime offset : 1792 kiB 00:24:51.513 [2024-12-05 13:56:22.876543] MalCache1: Core runtime size : 1172 kiB 00:24:51.513 [2024-12-05 13:56:22.876549] MalCache1: Core UUID offset : 3072 kiB 00:24:51.513 [2024-12-05 13:56:22.876563] MalCache1: Core UUID size : 16384 kiB 00:24:51.513 [2024-12-05 13:56:22.876570] MalCache1: Cleaning offset : 35840 kiB 00:24:51.513 [2024-12-05 13:56:22.876576] MalCache1: Cleaning size : 788 kiB 00:24:51.513 [2024-12-05 13:56:22.876583] MalCache1: LRU list offset : 36736 kiB 00:24:51.513 [2024-12-05 13:56:22.876589] MalCache1: LRU list size : 592 kiB 00:24:51.513 [2024-12-05 13:56:22.876596] MalCache1: Collision offset : 37376 kiB 00:24:51.513 [2024-12-05 13:56:22.876602] MalCache1: Collision size : 788 kiB 00:24:51.513 [2024-12-05 13:56:22.876608] MalCache1: List info offset : 38272 kiB 00:24:51.513 [2024-12-05 13:56:22.876614] MalCache1: List info size : 592 kiB 00:24:51.513 [2024-12-05 13:56:22.876621] MalCache1: Hash offset : 38912 kiB 00:24:51.513 [2024-12-05 13:56:22.876627] MalCache1: Hash size : 68 kiB 00:24:51.513 [2024-12-05 13:56:22.876641] MalCache1: Cache line size: 4 kiB 00:24:51.513 [2024-12-05 13:56:22.876648] MalCache1: Metadata size on device: 39040 kiB 00:24:51.513 [2024-12-05 13:56:22.887282] MalCache1: Policy 'always' initialized successfully 00:24:51.773 [2024-12-05 13:56:23.099679] MalCache1: Done 
saving cache state! 00:24:51.773 [2024-12-05 13:56:23.131056] MalCache1: Cache attached 00:24:51.773 [2024-12-05 13:56:23.131151] MalCache1: Successfully attached 00:24:51.773 [2024-12-05 13:56:23.131407] MalCache1: Inserting core Malloc1 00:24:51.773 [2024-12-05 13:56:23.131432] MalCache1.Malloc1: Seqential cutoff init 00:24:51.773 [2024-12-05 13:56:23.162928] MalCache1.Malloc1: Successfully added 00:24:51.773 [2024-12-05 13:56:23.168907] vbdev_ocf.c:1086:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:24:51.773 [2024-12-05 13:56:23.169128] MalCache1: Inserting core Malloc2 00:24:51.773 [2024-12-05 13:56:23.169152] MalCache1.Malloc2: Seqential cutoff init 00:24:51.773 [2024-12-05 13:56:23.200693] MalCache1.Malloc2: Successfully added 00:24:51.773 Running I/O for 4 seconds... 00:24:54.088 67904.00 IOPS, 265.25 MiB/s [2024-12-05T12:56:26.550Z] 67872.00 IOPS, 265.12 MiB/s [2024-12-05T12:56:27.485Z] 67861.33 IOPS, 265.08 MiB/s [2024-12-05T12:56:27.485Z] 67840.00 IOPS, 265.00 MiB/s 00:24:55.959 Latency(us) 00:24:55.959 [2024-12-05T12:56:27.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.959 Job: MalCache1 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:24:55.959 MalCache1 : 4.01 33883.94 132.36 0.00 0.00 3770.17 687.42 7636.37 00:24:55.959 Job: MalCache2 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:24:55.959 MalCache2 : 4.01 33872.96 132.32 0.00 0.00 3770.26 669.61 7750.34 00:24:55.959 [2024-12-05T12:56:27.485Z] =================================================================================================================== 00:24:55.959 [2024-12-05T12:56:27.485Z] Total : 67756.90 264.68 0.00 0.00 3770.21 669.61 7750.34 00:24:55.959 [2024-12-05 13:56:27.241753] MalCache1: Flushing cache 00:24:55.959 [2024-12-05 13:56:27.241792] MalCache1: Flushing cache completed 00:24:55.959 [2024-12-05 13:56:27.243346] MalCache1: Stopping cache 00:24:55.959 [2024-12-05 13:56:27.430613] MalCache1: Done saving cache state! 
00:24:55.959 [2024-12-05 13:56:27.444731] Cache MalCache1 successfully stopped 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w unmap 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@14 -- # gen_malloc_ocf_json 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@2 -- # local size=300 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@3 -- # local block_size=512 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@4 -- # local config 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:24:56.895 { 00:24:56.895 "method": "bdev_malloc_create", 00:24:56.895 "params": { 00:24:56.895 "name": "Malloc$malloc", 00:24:56.895 "num_blocks": $(( (size << 20) / block_size )), 00:24:56.895 "block_size": 512 00:24:56.895 } 00:24:56.895 } 00:24:56.895 JSON 00:24:56.895 )") 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:24:56.895 { 00:24:56.895 "method": "bdev_malloc_create", 00:24:56.895 "params": { 00:24:56.895 "name": "Malloc$malloc", 00:24:56.895 "num_blocks": $(( (size << 20) / block_size )), 00:24:56.895 "block_size": 512 00:24:56.895 } 00:24:56.895 } 00:24:56.895 JSON 00:24:56.895 )") 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:24:56.895 { 00:24:56.895 "method": "bdev_malloc_create", 00:24:56.895 "params": { 00:24:56.895 "name": "Malloc$malloc", 00:24:56.895 "num_blocks": $(( (size << 20) / block_size )), 00:24:56.895 "block_size": 512 00:24:56.895 } 00:24:56.895 } 00:24:56.895 JSON 00:24:56.895 )") 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:24:56.895 13:56:28 
ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:24:56.895 { 00:24:56.895 "method": "bdev_ocf_create", 00:24:56.895 "params": { 00:24:56.895 "name": "MalCache$ocf", 00:24:56.895 "mode": "${ocf_mode[ocf]}", 00:24:56.895 "cache_bdev_name": "${ocf_cache[ocf]}", 00:24:56.895 "core_bdev_name": "${ocf_core[ocf]}" 00:24:56.895 } 00:24:56.895 } 00:24:56.895 JSON 00:24:56.895 )") 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # cat 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:24:56.895 { 00:24:56.895 "method": "bdev_ocf_create", 00:24:56.895 "params": { 00:24:56.895 "name": "MalCache$ocf", 00:24:56.895 "mode": "${ocf_mode[ocf]}", 00:24:56.895 "cache_bdev_name": "${ocf_cache[ocf]}", 00:24:56.895 "core_bdev_name": "${ocf_core[ocf]}" 00:24:56.895 } 00:24:56.895 } 00:24:56.895 JSON 00:24:56.895 )") 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # cat 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # jq . 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # IFS=, 00:24:56.895 13:56:28 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:24:56.895 "method": "bdev_malloc_create", 00:24:56.895 "params": { 00:24:56.895 "name": "Malloc0", 00:24:56.895 "num_blocks": 614400, 00:24:56.895 "block_size": 512 00:24:56.895 } 00:24:56.895 },{ 00:24:56.895 "method": "bdev_malloc_create", 00:24:56.895 "params": { 00:24:56.895 "name": "Malloc1", 00:24:56.895 "num_blocks": 614400, 00:24:56.895 "block_size": 512 00:24:56.895 } 00:24:56.895 },{ 00:24:56.895 "method": "bdev_malloc_create", 00:24:56.895 "params": { 00:24:56.895 "name": "Malloc2", 00:24:56.895 "num_blocks": 614400, 00:24:56.895 "block_size": 512 00:24:56.895 } 00:24:56.895 },{ 00:24:56.895 "method": "bdev_ocf_create", 00:24:56.895 "params": { 00:24:56.895 "name": "MalCache1", 00:24:56.895 "mode": "wt", 00:24:56.895 "cache_bdev_name": "Malloc0", 00:24:56.895 "core_bdev_name": "Malloc1" 00:24:56.895 } 00:24:56.895 },{ 00:24:56.895 "method": "bdev_ocf_create", 00:24:56.895 "params": { 00:24:56.895 "name": "MalCache2", 00:24:56.895 "mode": "pt", 00:24:56.895 "cache_bdev_name": "Malloc0", 00:24:56.895 "core_bdev_name": "Malloc2" 00:24:56.895 } 00:24:56.895 }' 00:24:56.895 [2024-12-05 13:56:28.116993] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
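The num_blocks value of 614400 in every bdev_malloc_create call above comes from the mallocs.conf expression $(( (size << 20) / block_size )) with size=300 and block_size=512, i.e. a 300 MiB device divided into 512-byte blocks. A quick bash check of that arithmetic, simply reproducing the expression from the trace:

    size=300 block_size=512
    echo $(( (size << 20) / block_size ))   # (300 * 1048576) / 512 = 614400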
00:24:56.895 [2024-12-05 13:56:28.117075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964320 ] 00:24:56.895 [2024-12-05 13:56:28.239285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.895 [2024-12-05 13:56:28.294228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.154 [2024-12-05 13:56:28.503532] 'OCF_Core' volume operations registered 00:24:57.154 [2024-12-05 13:56:28.503575] 'OCF_Cache' volume operations registered 00:24:57.154 [2024-12-05 13:56:28.507991] 'OCF Composite' volume operations registered 00:24:57.154 [2024-12-05 13:56:28.512462] 'SPDK_block_device' volume operations registered 00:24:57.417 [2024-12-05 13:56:28.764944] Inserting cache MalCache1 00:24:57.417 [2024-12-05 13:56:28.765440] MalCache1: Metadata initialized 00:24:57.417 [2024-12-05 13:56:28.765897] MalCache1: Successfully added 00:24:57.417 [2024-12-05 13:56:28.765915] MalCache1: Cache mode : wt 00:24:57.417 [2024-12-05 13:56:28.777012] MalCache1: Super block config offset : 0 kiB 00:24:57.417 [2024-12-05 13:56:28.777035] MalCache1: Super block config size : 2200 B 00:24:57.417 [2024-12-05 13:56:28.777043] MalCache1: Super block runtime offset : 128 kiB 00:24:57.417 [2024-12-05 13:56:28.777049] MalCache1: Super block runtime size : 4 B 00:24:57.417 [2024-12-05 13:56:28.777056] MalCache1: Reserved offset : 256 kiB 00:24:57.417 [2024-12-05 13:56:28.777062] MalCache1: Reserved size : 128 kiB 00:24:57.417 [2024-12-05 13:56:28.777069] MalCache1: Part config offset : 384 kiB 00:24:57.417 [2024-12-05 13:56:28.777075] MalCache1: Part config size : 48 kiB 00:24:57.417 [2024-12-05 13:56:28.777081] MalCache1: Part runtime offset : 640 kiB 00:24:57.417 [2024-12-05 13:56:28.777088] MalCache1: Part runtime size : 72 kiB 00:24:57.417 [2024-12-05 13:56:28.777094] MalCache1: Core config offset : 768 kiB 00:24:57.417 [2024-12-05 13:56:28.777100] MalCache1: Core config size : 512 kiB 00:24:57.417 [2024-12-05 13:56:28.777106] MalCache1: Core runtime offset : 1792 kiB 00:24:57.417 [2024-12-05 13:56:28.777113] MalCache1: Core runtime size : 1172 kiB 00:24:57.417 [2024-12-05 13:56:28.777119] MalCache1: Core UUID offset : 3072 kiB 00:24:57.417 [2024-12-05 13:56:28.777125] MalCache1: Core UUID size : 16384 kiB 00:24:57.417 [2024-12-05 13:56:28.777132] MalCache1: Cleaning offset : 35840 kiB 00:24:57.417 [2024-12-05 13:56:28.777138] MalCache1: Cleaning size : 788 kiB 00:24:57.417 [2024-12-05 13:56:28.777144] MalCache1: LRU list offset : 36736 kiB 00:24:57.417 [2024-12-05 13:56:28.777151] MalCache1: LRU list size : 592 kiB 00:24:57.417 [2024-12-05 13:56:28.777157] MalCache1: Collision offset : 37376 kiB 00:24:57.417 [2024-12-05 13:56:28.777163] MalCache1: Collision size : 788 kiB 00:24:57.417 [2024-12-05 13:56:28.777169] MalCache1: List info offset : 38272 kiB 00:24:57.417 [2024-12-05 13:56:28.777176] MalCache1: List info size : 592 kiB 00:24:57.417 [2024-12-05 13:56:28.777182] MalCache1: Hash offset : 38912 kiB 00:24:57.417 [2024-12-05 13:56:28.777189] MalCache1: Hash size : 68 kiB 00:24:57.417 [2024-12-05 13:56:28.777195] MalCache1: Cache line size: 4 kiB 00:24:57.417 [2024-12-05 13:56:28.777202] MalCache1: Metadata size on device: 39040 kiB 00:24:57.417 [2024-12-05 13:56:28.787895] MalCache1: Policy 'always' initialized successfully 00:24:57.740 [2024-12-05 13:56:28.999650] MalCache1: Done 
saving cache state! 00:24:57.740 [2024-12-05 13:56:29.030516] MalCache1: Cache attached 00:24:57.740 [2024-12-05 13:56:29.030613] MalCache1: Successfully attached 00:24:57.740 [2024-12-05 13:56:29.030906] MalCache1: Inserting core Malloc1 00:24:57.740 [2024-12-05 13:56:29.030934] MalCache1.Malloc1: Seqential cutoff init 00:24:57.740 [2024-12-05 13:56:29.061802] MalCache1.Malloc1: Successfully added 00:24:57.740 [2024-12-05 13:56:29.067852] vbdev_ocf.c:1086:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:24:57.740 [2024-12-05 13:56:29.068099] MalCache1: Inserting core Malloc2 00:24:57.740 [2024-12-05 13:56:29.068123] MalCache1.Malloc2: Seqential cutoff init 00:24:57.740 [2024-12-05 13:56:29.099069] MalCache1.Malloc2: Successfully added 00:24:57.740 Running I/O for 4 seconds... 00:24:59.749 52928.00 IOPS, 206.75 MiB/s [2024-12-05T12:56:32.209Z] 52768.00 IOPS, 206.12 MiB/s [2024-12-05T12:56:33.144Z] 52736.00 IOPS, 206.00 MiB/s [2024-12-05T12:56:33.144Z] 52704.00 IOPS, 205.88 MiB/s 00:25:01.618 Latency(us) 00:25:01.618 [2024-12-05T12:56:33.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.618 Job: MalCache1 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:25:01.618 MalCache1 : 4.01 26330.71 102.85 0.00 0.00 4854.54 2820.90 8662.15 00:25:01.618 Job: MalCache2 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:25:01.618 MalCache2 : 4.01 26323.40 102.83 0.00 0.00 4854.67 2450.48 8662.15 00:25:01.618 [2024-12-05T12:56:33.144Z] =================================================================================================================== 00:25:01.618 [2024-12-05T12:56:33.144Z] Total : 52654.11 205.68 0.00 0.00 4854.61 2450.48 8662.15 00:25:01.876 [2024-12-05 13:56:33.140088] MalCache1: Flushing cache 00:25:01.876 [2024-12-05 13:56:33.140126] MalCache1: Flushing cache completed 00:25:01.876 [2024-12-05 13:56:33.141814] MalCache1: Stopping cache 00:25:01.876 [2024-12-05 13:56:33.330663] MalCache1: Done saving cache state! 
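As the vbdev_ocf notice above shows, MalCache2 attaches to the cache device Malloc0 that MalCache1 already opened, so both OCF bdevs share one cache while exposing different cache modes (wt and pt). The test drives this entirely through the --json config, but a roughly equivalent interactive setup would use the bdev_ocf_create RPC; a hypothetical sketch, assuming rpc.py's usual positional form (name, mode, cache bdev, core bdev):

    # hypothetical rpc.py equivalent of the two bdev_ocf_create entries in the
    # generated config; not the path this test actually takes
    ./scripts/rpc.py bdev_ocf_create MalCache1 wt Malloc0 Malloc1
    ./scripts/rpc.py bdev_ocf_create MalCache2 pt Malloc0 Malloc2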
00:25:01.876 [2024-12-05 13:56:33.346114] Cache MalCache1 successfully stopped 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w write 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/bdevperf-iotypes.sh@15 -- # gen_malloc_ocf_json 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@2 -- # local size=300 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@3 -- # local block_size=512 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@4 -- # local config 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:25:02.814 { 00:25:02.814 "method": "bdev_malloc_create", 00:25:02.814 "params": { 00:25:02.814 "name": "Malloc$malloc", 00:25:02.814 "num_blocks": $(( (size << 20) / block_size )), 00:25:02.814 "block_size": 512 00:25:02.814 } 00:25:02.814 } 00:25:02.814 JSON 00:25:02.814 )") 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:25:02.814 { 00:25:02.814 "method": "bdev_malloc_create", 00:25:02.814 "params": { 00:25:02.814 "name": "Malloc$malloc", 00:25:02.814 "num_blocks": $(( (size << 20) / block_size )), 00:25:02.814 "block_size": 512 00:25:02.814 } 00:25:02.814 } 00:25:02.814 JSON 00:25:02.814 )") 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:25:02.814 { 00:25:02.814 "method": "bdev_malloc_create", 00:25:02.814 "params": { 00:25:02.814 "name": "Malloc$malloc", 00:25:02.814 "num_blocks": $(( (size << 20) / block_size )), 00:25:02.814 "block_size": 512 00:25:02.814 } 00:25:02.814 } 00:25:02.814 JSON 00:25:02.814 )") 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@21 -- # cat 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:25:02.814 13:56:34 
ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:25:02.814 { 00:25:02.814 "method": "bdev_ocf_create", 00:25:02.814 "params": { 00:25:02.814 "name": "MalCache$ocf", 00:25:02.814 "mode": "${ocf_mode[ocf]}", 00:25:02.814 "cache_bdev_name": "${ocf_cache[ocf]}", 00:25:02.814 "core_bdev_name": "${ocf_core[ocf]}" 00:25:02.814 } 00:25:02.814 } 00:25:02.814 JSON 00:25:02.814 )") 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # cat 00:25:02.814 [2024-12-05 13:56:34.053082] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:02.814 [2024-12-05 13:56:34.053141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965049 ] 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:25:02.814 { 00:25:02.814 "method": "bdev_ocf_create", 00:25:02.814 "params": { 00:25:02.814 "name": "MalCache$ocf", 00:25:02.814 "mode": "${ocf_mode[ocf]}", 00:25:02.814 "cache_bdev_name": "${ocf_cache[ocf]}", 00:25:02.814 "core_bdev_name": "${ocf_core[ocf]}" 00:25:02.814 } 00:25:02.814 } 00:25:02.814 JSON 00:25:02.814 )") 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@44 -- # cat 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # jq . 
00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # IFS=, 00:25:02.814 13:56:34 ocf.ocf_bdevperf_iotypes -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:25:02.814 "method": "bdev_malloc_create", 00:25:02.814 "params": { 00:25:02.814 "name": "Malloc0", 00:25:02.814 "num_blocks": 614400, 00:25:02.814 "block_size": 512 00:25:02.814 } 00:25:02.814 },{ 00:25:02.814 "method": "bdev_malloc_create", 00:25:02.814 "params": { 00:25:02.814 "name": "Malloc1", 00:25:02.814 "num_blocks": 614400, 00:25:02.814 "block_size": 512 00:25:02.814 } 00:25:02.814 },{ 00:25:02.814 "method": "bdev_malloc_create", 00:25:02.814 "params": { 00:25:02.814 "name": "Malloc2", 00:25:02.814 "num_blocks": 614400, 00:25:02.814 "block_size": 512 00:25:02.814 } 00:25:02.814 },{ 00:25:02.814 "method": "bdev_ocf_create", 00:25:02.814 "params": { 00:25:02.814 "name": "MalCache1", 00:25:02.814 "mode": "wt", 00:25:02.814 "cache_bdev_name": "Malloc0", 00:25:02.814 "core_bdev_name": "Malloc1" 00:25:02.814 } 00:25:02.814 },{ 00:25:02.814 "method": "bdev_ocf_create", 00:25:02.814 "params": { 00:25:02.814 "name": "MalCache2", 00:25:02.814 "mode": "pt", 00:25:02.814 "cache_bdev_name": "Malloc0", 00:25:02.814 "core_bdev_name": "Malloc2" 00:25:02.814 } 00:25:02.814 }' 00:25:02.814 [2024-12-05 13:56:34.157820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.814 [2024-12-05 13:56:34.210919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.073 [2024-12-05 13:56:34.395769] 'OCF_Core' volume operations registered 00:25:03.073 [2024-12-05 13:56:34.395806] 'OCF_Cache' volume operations registered 00:25:03.073 [2024-12-05 13:56:34.399790] 'OCF Composite' volume operations registered 00:25:03.073 [2024-12-05 13:56:34.403820] 'SPDK_block_device' volume operations registered 00:25:03.331 [2024-12-05 13:56:34.622412] Inserting cache MalCache1 00:25:03.331 [2024-12-05 13:56:34.622845] MalCache1: Metadata initialized 00:25:03.331 [2024-12-05 13:56:34.623294] MalCache1: Successfully added 00:25:03.331 [2024-12-05 13:56:34.623310] MalCache1: Cache mode : wt 00:25:03.331 [2024-12-05 13:56:34.633417] MalCache1: Super block config offset : 0 kiB 00:25:03.331 [2024-12-05 13:56:34.633440] MalCache1: Super block config size : 2200 B 00:25:03.331 [2024-12-05 13:56:34.633447] MalCache1: Super block runtime offset : 128 kiB 00:25:03.331 [2024-12-05 13:56:34.633454] MalCache1: Super block runtime size : 4 B 00:25:03.331 [2024-12-05 13:56:34.633461] MalCache1: Reserved offset : 256 kiB 00:25:03.331 [2024-12-05 13:56:34.633467] MalCache1: Reserved size : 128 kiB 00:25:03.331 [2024-12-05 13:56:34.633474] MalCache1: Part config offset : 384 kiB 00:25:03.331 [2024-12-05 13:56:34.633480] MalCache1: Part config size : 48 kiB 00:25:03.331 [2024-12-05 13:56:34.633487] MalCache1: Part runtime offset : 640 kiB 00:25:03.331 [2024-12-05 13:56:34.633493] MalCache1: Part runtime size : 72 kiB 00:25:03.331 [2024-12-05 13:56:34.633499] MalCache1: Core config offset : 768 kiB 00:25:03.331 [2024-12-05 13:56:34.633506] MalCache1: Core config size : 512 kiB 00:25:03.331 [2024-12-05 13:56:34.633512] MalCache1: Core runtime offset : 1792 kiB 00:25:03.331 [2024-12-05 13:56:34.633519] MalCache1: Core runtime size : 1172 kiB 00:25:03.331 [2024-12-05 13:56:34.633525] MalCache1: Core UUID offset : 3072 kiB 00:25:03.331 [2024-12-05 13:56:34.633531] MalCache1: Core UUID size : 16384 kiB 00:25:03.331 [2024-12-05 13:56:34.633538] MalCache1: Cleaning offset : 35840 kiB 00:25:03.331 [2024-12-05 13:56:34.633544] 
MalCache1: Cleaning size : 788 kiB 00:25:03.331 [2024-12-05 13:56:34.633551] MalCache1: LRU list offset : 36736 kiB 00:25:03.331 [2024-12-05 13:56:34.633557] MalCache1: LRU list size : 592 kiB 00:25:03.331 [2024-12-05 13:56:34.633570] MalCache1: Collision offset : 37376 kiB 00:25:03.331 [2024-12-05 13:56:34.633576] MalCache1: Collision size : 788 kiB 00:25:03.331 [2024-12-05 13:56:34.633583] MalCache1: List info offset : 38272 kiB 00:25:03.331 [2024-12-05 13:56:34.633589] MalCache1: List info size : 592 kiB 00:25:03.331 [2024-12-05 13:56:34.633595] MalCache1: Hash offset : 38912 kiB 00:25:03.331 [2024-12-05 13:56:34.633602] MalCache1: Hash size : 68 kiB 00:25:03.331 [2024-12-05 13:56:34.633609] MalCache1: Cache line size: 4 kiB 00:25:03.331 [2024-12-05 13:56:34.633615] MalCache1: Metadata size on device: 39040 kiB 00:25:03.331 [2024-12-05 13:56:34.643457] MalCache1: Policy 'always' initialized successfully 00:25:03.590 [2024-12-05 13:56:34.855739] MalCache1: Done saving cache state! 00:25:03.590 [2024-12-05 13:56:34.887571] MalCache1: Cache attached 00:25:03.590 [2024-12-05 13:56:34.887666] MalCache1: Successfully attached 00:25:03.590 [2024-12-05 13:56:34.887951] MalCache1: Inserting core Malloc1 00:25:03.590 [2024-12-05 13:56:34.887977] MalCache1.Malloc1: Seqential cutoff init 00:25:03.590 [2024-12-05 13:56:34.919367] MalCache1.Malloc1: Successfully added 00:25:03.590 [2024-12-05 13:56:34.925253] vbdev_ocf.c:1086:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:25:03.590 [2024-12-05 13:56:34.925489] MalCache1: Inserting core Malloc2 00:25:03.590 [2024-12-05 13:56:34.925512] MalCache1.Malloc2: Seqential cutoff init 00:25:03.590 [2024-12-05 13:56:34.957235] MalCache1.Malloc2: Successfully added 00:25:03.590 Running I/O for 4 seconds... 00:25:05.477 27776.00 IOPS, 108.50 MiB/s [2024-12-05T12:56:38.377Z] 27648.00 IOPS, 108.00 MiB/s [2024-12-05T12:56:39.315Z] 30784.00 IOPS, 120.25 MiB/s [2024-12-05T12:56:39.315Z] 34032.00 IOPS, 132.94 MiB/s 00:25:07.789 Latency(us) 00:25:07.789 [2024-12-05T12:56:39.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.789 Job: MalCache1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:07.789 MalCache1 : 4.01 17012.05 66.45 0.00 0.00 7515.25 2863.64 13221.18 00:25:07.789 Job: MalCache2 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:07.789 MalCache2 : 4.01 17005.72 66.43 0.00 0.00 7515.29 2578.70 13164.19 00:25:07.789 [2024-12-05T12:56:39.315Z] =================================================================================================================== 00:25:07.789 [2024-12-05T12:56:39.315Z] Total : 34017.76 132.88 0.00 0.00 7515.27 2578.70 13221.18 00:25:07.789 [2024-12-05 13:56:38.999425] MalCache1: Flushing cache 00:25:07.789 [2024-12-05 13:56:38.999458] MalCache1: Flushing cache completed 00:25:07.789 [2024-12-05 13:56:39.000386] MalCache1: Stopping cache 00:25:07.790 [2024-12-05 13:56:39.188542] MalCache1: Done saving cache state! 
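That was the last of the three bdevperf passes in this test; bdevperf-iotypes.sh lines 13 through 15 repeat the same config generation and invocation with only the -w argument changing (flush, unmap, write). An illustrative condensed form of that pattern, a sketch rather than what the script literally contains, since the script issues the three runs on separate lines:

    for workload in flush unmap write; do
        # same queue depth, I/O size and 4 second duration as the traced runs
        "$bdevperf" --json <(gen_malloc_ocf_json) -q 128 -o 4096 -t 4 -w "$workload"
    done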
00:25:07.790 [2024-12-05 13:56:39.203931] Cache MalCache1 successfully stopped 00:25:08.358 00:25:08.358 real 0m17.824s 00:25:08.358 user 0m16.295s 00:25:08.358 sys 0m1.610s 00:25:08.358 13:56:39 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.358 13:56:39 ocf.ocf_bdevperf_iotypes -- common/autotest_common.sh@10 -- # set +x 00:25:08.358 ************************************ 00:25:08.358 END TEST ocf_bdevperf_iotypes 00:25:08.358 ************************************ 00:25:08.358 13:56:39 ocf -- ocf/ocf.sh@13 -- # run_test ocf_stats /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh 00:25:08.358 13:56:39 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:08.358 13:56:39 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.358 13:56:39 ocf -- common/autotest_common.sh@10 -- # set +x 00:25:08.617 ************************************ 00:25:08.617 START TEST ocf_stats 00:25:08.617 ************************************ 00:25:08.617 13:56:39 ocf.ocf_stats -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh 00:25:08.617 13:56:39 ocf.ocf_stats -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:08.617 13:56:39 ocf.ocf_stats -- common/autotest_common.sh@1711 -- # lcov --version 00:25:08.617 13:56:39 ocf.ocf_stats -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:08.617 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@344 -- # case "$op" in 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@345 -- # : 1 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@365 -- # decimal 1 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@353 -- # local d=1 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@355 -- # echo 1 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@366 -- # decimal 2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@353 -- # local d=2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@355 -- # echo 2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.617 13:56:40 ocf.ocf_stats -- scripts/common.sh@368 -- # return 0 00:25:08.617 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.617 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:08.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.617 --rc genhtml_branch_coverage=1 00:25:08.617 --rc genhtml_function_coverage=1 00:25:08.617 --rc genhtml_legend=1 00:25:08.617 --rc geninfo_all_blocks=1 00:25:08.617 --rc geninfo_unexecuted_blocks=1 00:25:08.617 00:25:08.617 ' 00:25:08.617 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:08.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.617 --rc genhtml_branch_coverage=1 00:25:08.617 --rc genhtml_function_coverage=1 00:25:08.617 --rc genhtml_legend=1 00:25:08.617 --rc geninfo_all_blocks=1 00:25:08.617 --rc geninfo_unexecuted_blocks=1 00:25:08.617 00:25:08.617 ' 00:25:08.617 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:08.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.617 --rc genhtml_branch_coverage=1 00:25:08.617 --rc genhtml_function_coverage=1 00:25:08.617 --rc genhtml_legend=1 00:25:08.617 --rc geninfo_all_blocks=1 00:25:08.617 --rc geninfo_unexecuted_blocks=1 00:25:08.617 00:25:08.617 ' 00:25:08.617 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:08.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.617 --rc genhtml_branch_coverage=1 00:25:08.617 --rc genhtml_function_coverage=1 00:25:08.617 --rc genhtml_legend=1 00:25:08.617 --rc geninfo_all_blocks=1 00:25:08.617 --rc geninfo_unexecuted_blocks=1 00:25:08.617 00:25:08.617 ' 00:25:08.617 13:56:40 ocf.ocf_stats -- integrity/stats.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/stats.sh@12 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/stats.sh@14 -- # bdev_perf_pid=3965900 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/stats.sh@15 -- # waitforlisten 3965900 00:25:08.618 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@835 -- # '[' -z 3965900 ']' 00:25:08.618 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/stats.sh@13 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock 00:25:08.618 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/stats.sh@13 -- # gen_malloc_ocf_json 00:25:08.618 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.618 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.618 13:56:40 ocf.ocf_stats -- common/autotest_common.sh@10 -- # set +x 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@2 -- # local size=300 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@3 -- # local block_size=512 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@4 -- # local config 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:25:08.618 { 00:25:08.618 "method": "bdev_malloc_create", 00:25:08.618 "params": { 00:25:08.618 "name": "Malloc$malloc", 00:25:08.618 "num_blocks": $(( (size << 20) / block_size )), 00:25:08.618 "block_size": 512 00:25:08.618 } 00:25:08.618 } 00:25:08.618 JSON 00:25:08.618 )") 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@21 -- # cat 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:25:08.618 { 00:25:08.618 "method": "bdev_malloc_create", 00:25:08.618 "params": { 00:25:08.618 "name": "Malloc$malloc", 00:25:08.618 "num_blocks": $(( (size << 20) / block_size )), 00:25:08.618 "block_size": 512 00:25:08.618 } 00:25:08.618 } 00:25:08.618 JSON 00:25:08.618 )") 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@21 -- # cat 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:25:08.618 { 00:25:08.618 "method": "bdev_malloc_create", 00:25:08.618 "params": { 00:25:08.618 "name": "Malloc$malloc", 00:25:08.618 "num_blocks": $(( (size << 20) / block_size )), 00:25:08.618 "block_size": 512 00:25:08.618 } 00:25:08.618 } 00:25:08.618 JSON 00:25:08.618 )") 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@21 -- # cat 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 
00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:25:08.618 { 00:25:08.618 "method": "bdev_ocf_create", 00:25:08.618 "params": { 00:25:08.618 "name": "MalCache$ocf", 00:25:08.618 "mode": "${ocf_mode[ocf]}", 00:25:08.618 "cache_bdev_name": "${ocf_cache[ocf]}", 00:25:08.618 "core_bdev_name": "${ocf_core[ocf]}" 00:25:08.618 } 00:25:08.618 } 00:25:08.618 JSON 00:25:08.618 )") 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@44 -- # cat 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:25:08.618 { 00:25:08.618 "method": "bdev_ocf_create", 00:25:08.618 "params": { 00:25:08.618 "name": "MalCache$ocf", 00:25:08.618 "mode": "${ocf_mode[ocf]}", 00:25:08.618 "cache_bdev_name": "${ocf_cache[ocf]}", 00:25:08.618 "core_bdev_name": "${ocf_core[ocf]}" 00:25:08.618 } 00:25:08.618 } 00:25:08.618 JSON 00:25:08.618 )") 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@44 -- # cat 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@47 -- # jq . 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@47 -- # IFS=, 00:25:08.618 13:56:40 ocf.ocf_stats -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:25:08.618 "method": "bdev_malloc_create", 00:25:08.618 "params": { 00:25:08.618 "name": "Malloc0", 00:25:08.618 "num_blocks": 614400, 00:25:08.618 "block_size": 512 00:25:08.618 } 00:25:08.618 },{ 00:25:08.618 "method": "bdev_malloc_create", 00:25:08.618 "params": { 00:25:08.618 "name": "Malloc1", 00:25:08.618 "num_blocks": 614400, 00:25:08.618 "block_size": 512 00:25:08.618 } 00:25:08.618 },{ 00:25:08.618 "method": "bdev_malloc_create", 00:25:08.618 "params": { 00:25:08.618 "name": "Malloc2", 00:25:08.618 "num_blocks": 614400, 00:25:08.618 "block_size": 512 00:25:08.618 } 00:25:08.618 },{ 00:25:08.618 "method": "bdev_ocf_create", 00:25:08.618 "params": { 00:25:08.618 "name": "MalCache1", 00:25:08.618 "mode": "wt", 00:25:08.618 "cache_bdev_name": "Malloc0", 00:25:08.618 "core_bdev_name": "Malloc1" 00:25:08.618 } 00:25:08.618 },{ 00:25:08.618 "method": "bdev_ocf_create", 00:25:08.618 "params": { 00:25:08.618 "name": "MalCache2", 00:25:08.618 "mode": "pt", 00:25:08.618 "cache_bdev_name": "Malloc0", 00:25:08.618 "core_bdev_name": "Malloc2" 00:25:08.618 } 00:25:08.618 }' 00:25:08.618 [2024-12-05 13:56:40.134774] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
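Unlike the iotypes passes, this bdevperf instance runs a 120 second write workload and is started with -r /var/tmp/spdk.sock, so stats.sh can query the live OCF counters over the RPC socket while I/O is still in flight (the rpc_cmd bdev_ocf_get_stats call appears a little further down). A minimal sketch of that query, assuming the stock rpc.py client from the same workspace:

    # poll cache statistics from the running bdevperf instance
    /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock bdev_ocf_get_stats MalCache1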
00:25:08.618 [2024-12-05 13:56:40.134849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965900 ] 00:25:08.877 [2024-12-05 13:56:40.257484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.877 [2024-12-05 13:56:40.313728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.136 [2024-12-05 13:56:40.523268] 'OCF_Core' volume operations registered 00:25:09.136 [2024-12-05 13:56:40.523310] 'OCF_Cache' volume operations registered 00:25:09.136 [2024-12-05 13:56:40.527725] 'OCF Composite' volume operations registered 00:25:09.136 [2024-12-05 13:56:40.532165] 'SPDK_block_device' volume operations registered 00:25:09.395 [2024-12-05 13:56:40.793488] Inserting cache MalCache1 00:25:09.395 [2024-12-05 13:56:40.793991] MalCache1: Metadata initialized 00:25:09.395 [2024-12-05 13:56:40.794441] MalCache1: Successfully added 00:25:09.395 [2024-12-05 13:56:40.794459] MalCache1: Cache mode : wt 00:25:09.395 [2024-12-05 13:56:40.805409] MalCache1: Super block config offset : 0 kiB 00:25:09.395 [2024-12-05 13:56:40.805433] MalCache1: Super block config size : 2200 B 00:25:09.395 [2024-12-05 13:56:40.805440] MalCache1: Super block runtime offset : 128 kiB 00:25:09.395 [2024-12-05 13:56:40.805447] MalCache1: Super block runtime size : 4 B 00:25:09.395 [2024-12-05 13:56:40.805453] MalCache1: Reserved offset : 256 kiB 00:25:09.395 [2024-12-05 13:56:40.805459] MalCache1: Reserved size : 128 kiB 00:25:09.395 [2024-12-05 13:56:40.805466] MalCache1: Part config offset : 384 kiB 00:25:09.395 [2024-12-05 13:56:40.805472] MalCache1: Part config size : 48 kiB 00:25:09.395 [2024-12-05 13:56:40.805479] MalCache1: Part runtime offset : 640 kiB 00:25:09.395 [2024-12-05 13:56:40.805485] MalCache1: Part runtime size : 72 kiB 00:25:09.395 [2024-12-05 13:56:40.805491] MalCache1: Core config offset : 768 kiB 00:25:09.395 [2024-12-05 13:56:40.805497] MalCache1: Core config size : 512 kiB 00:25:09.395 [2024-12-05 13:56:40.805504] MalCache1: Core runtime offset : 1792 kiB 00:25:09.395 [2024-12-05 13:56:40.805510] MalCache1: Core runtime size : 1172 kiB 00:25:09.395 [2024-12-05 13:56:40.805516] MalCache1: Core UUID offset : 3072 kiB 00:25:09.395 [2024-12-05 13:56:40.805522] MalCache1: Core UUID size : 16384 kiB 00:25:09.395 [2024-12-05 13:56:40.805529] MalCache1: Cleaning offset : 35840 kiB 00:25:09.395 [2024-12-05 13:56:40.805535] MalCache1: Cleaning size : 788 kiB 00:25:09.395 [2024-12-05 13:56:40.805541] MalCache1: LRU list offset : 36736 kiB 00:25:09.395 [2024-12-05 13:56:40.805547] MalCache1: LRU list size : 592 kiB 00:25:09.395 [2024-12-05 13:56:40.805554] MalCache1: Collision offset : 37376 kiB 00:25:09.395 [2024-12-05 13:56:40.805560] MalCache1: Collision size : 788 kiB 00:25:09.395 [2024-12-05 13:56:40.805566] MalCache1: List info offset : 38272 kiB 00:25:09.395 [2024-12-05 13:56:40.805572] MalCache1: List info size : 592 kiB 00:25:09.395 [2024-12-05 13:56:40.805579] MalCache1: Hash offset : 38912 kiB 00:25:09.395 [2024-12-05 13:56:40.805585] MalCache1: Hash size : 68 kiB 00:25:09.395 [2024-12-05 13:56:40.805592] MalCache1: Cache line size: 4 kiB 00:25:09.395 [2024-12-05 13:56:40.805598] MalCache1: Metadata size on device: 39040 kiB 00:25:09.395 [2024-12-05 13:56:40.816254] MalCache1: Policy 'always' initialized successfully 00:25:09.654 [2024-12-05 13:56:41.029214] MalCache1: Done 
saving cache state! 00:25:09.654 [2024-12-05 13:56:41.060546] MalCache1: Cache attached 00:25:09.654 [2024-12-05 13:56:41.060649] MalCache1: Successfully attached 00:25:09.654 [2024-12-05 13:56:41.060934] MalCache1: Inserting core Malloc1 00:25:09.654 [2024-12-05 13:56:41.060962] MalCache1.Malloc1: Seqential cutoff init 00:25:09.654 [2024-12-05 13:56:41.092116] MalCache1.Malloc1: Successfully added 00:25:09.654 [2024-12-05 13:56:41.098179] vbdev_ocf.c:1086:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:25:09.654 [2024-12-05 13:56:41.098436] MalCache1: Inserting core Malloc2 00:25:09.654 [2024-12-05 13:56:41.098462] MalCache1.Malloc2: Seqential cutoff init 00:25:09.654 [2024-12-05 13:56:41.129893] MalCache1.Malloc2: Successfully added 00:25:09.654 Running I/O for 120 seconds... 00:25:09.654 13:56:41 ocf.ocf_stats -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.654 13:56:41 ocf.ocf_stats -- common/autotest_common.sh@868 -- # return 0 00:25:09.654 13:56:41 ocf.ocf_stats -- integrity/stats.sh@16 -- # sleep 1 00:25:11.034 27968.00 IOPS, 109.25 MiB/s [2024-12-05T12:56:42.560Z] 13:56:42 ocf.ocf_stats -- integrity/stats.sh@17 -- # rpc_cmd bdev_ocf_get_stats MalCache1 00:25:11.034 13:56:42 ocf.ocf_stats -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.034 13:56:42 ocf.ocf_stats -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 { 00:25:11.034 "usage": { 00:25:11.034 "occupancy": { 00:25:11.034 "count": 14432, 00:25:11.034 "percentage": "21.52", 00:25:11.034 "units": "4KiB blocks" 00:25:11.034 }, 00:25:11.034 "free": { 00:25:11.034 "count": 38176, 00:25:11.034 "percentage": "56.94", 00:25:11.034 "units": "4KiB blocks" 00:25:11.034 }, 00:25:11.034 "clean": { 00:25:11.034 "count": 14432, 00:25:11.034 "percentage": "100.0", 00:25:11.034 "units": "4KiB blocks" 00:25:11.034 }, 00:25:11.034 "dirty": { 00:25:11.034 "count": 0, 00:25:11.034 "percentage": "0.0", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 } 00:25:11.035 }, 00:25:11.035 "requests": { 00:25:11.035 "rd_hits": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "rd_partial_misses": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "rd_full_misses": { 00:25:11.035 "count": 4, 00:25:11.035 "percentage": "0.2", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "rd_total": { 00:25:11.035 "count": 4, 00:25:11.035 "percentage": "0.2", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "wr_hits": { 00:25:11.035 "count": 8, 00:25:11.035 "percentage": "0.5", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "wr_partial_misses": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "wr_full_misses": { 00:25:11.035 "count": 14424, 00:25:11.035 "percentage": "99.91", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "wr_total": { 00:25:11.035 "count": 14432, 00:25:11.035 "percentage": "99.97", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "rd_pt": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "wr_pt": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "serviced": { 00:25:11.035 "count": 14436, 00:25:11.035 "percentage": "100.0", 00:25:11.035 "units": "Requests" 
00:25:11.035 }, 00:25:11.035 "total": { 00:25:11.035 "count": 14436, 00:25:11.035 "percentage": "100.0", 00:25:11.035 "units": "Requests" 00:25:11.035 } 00:25:11.035 }, 00:25:11.035 "blocks": { 00:25:11.035 "core_volume_rd": { 00:25:11.035 "count": 9, 00:25:11.035 "percentage": "0.6", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "core_volume_wr": { 00:25:11.035 "count": 14432, 00:25:11.035 "percentage": "99.93", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "core_volume_total": { 00:25:11.035 "count": 14441, 00:25:11.035 "percentage": "100.0", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "cache_volume_rd": { 00:25:11.035 "count": 2, 00:25:11.035 "percentage": "0.1", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "cache_volume_wr": { 00:25:11.035 "count": 14441, 00:25:11.035 "percentage": "99.98", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "cache_volume_total": { 00:25:11.035 "count": 14443, 00:25:11.035 "percentage": "100.0", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "volume_rd": { 00:25:11.035 "count": 11, 00:25:11.035 "percentage": "0.7", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "volume_wr": { 00:25:11.035 "count": 14432, 00:25:11.035 "percentage": "99.92", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 }, 00:25:11.035 "volume_total": { 00:25:11.035 "count": 14443, 00:25:11.035 "percentage": "100.0", 00:25:11.035 "units": "4KiB blocks" 00:25:11.035 } 00:25:11.035 }, 00:25:11.035 "errors": { 00:25:11.035 "core_volume_rd": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "core_volume_wr": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "core_volume_total": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "cache_volume_rd": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "cache_volume_wr": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "cache_volume_total": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 }, 00:25:11.035 "total": { 00:25:11.035 "count": 0, 00:25:11.035 "percentage": "0.0", 00:25:11.035 "units": "Requests" 00:25:11.035 } 00:25:11.035 } 00:25:11.035 } 00:25:11.035 13:56:42 ocf.ocf_stats -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.035 13:56:42 ocf.ocf_stats -- integrity/stats.sh@18 -- # kill -9 3965900 00:25:11.035 13:56:42 ocf.ocf_stats -- integrity/stats.sh@19 -- # wait 3965900 00:25:11.035 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh: line 19: 3965900 Killed $bdevperf --json <(gen_malloc_ocf_json) -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock 00:25:11.035 13:56:42 ocf.ocf_stats -- integrity/stats.sh@19 -- # true 00:25:11.035 00:25:11.035 real 0m2.330s 00:25:11.035 user 0m1.934s 00:25:11.035 sys 0m0.666s 00:25:11.035 13:56:42 ocf.ocf_stats -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.035 13:56:42 ocf.ocf_stats -- common/autotest_common.sh@10 -- # set +x 00:25:11.035 ************************************ 00:25:11.035 END TEST ocf_stats 00:25:11.035 ************************************ 00:25:11.035 13:56:42 ocf -- ocf/ocf.sh@14 -- # 
run_test ocf_flush /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh 00:25:11.035 13:56:42 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:11.035 13:56:42 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.035 13:56:42 ocf -- common/autotest_common.sh@10 -- # set +x 00:25:11.035 ************************************ 00:25:11.035 START TEST ocf_flush 00:25:11.035 ************************************ 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@344 -- # case "$op" in 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@345 -- # : 1 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@365 -- # decimal 1 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@353 -- # local d=1 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@355 -- # echo 1 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@366 -- # decimal 2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@353 -- # local d=2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@355 -- # echo 2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.035 13:56:42 ocf.ocf_flush -- scripts/common.sh@368 -- # return 0 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.035 --rc genhtml_branch_coverage=1 00:25:11.035 --rc genhtml_function_coverage=1 00:25:11.035 --rc genhtml_legend=1 00:25:11.035 --rc geninfo_all_blocks=1 00:25:11.035 --rc geninfo_unexecuted_blocks=1 00:25:11.035 00:25:11.035 ' 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.035 --rc genhtml_branch_coverage=1 00:25:11.035 --rc genhtml_function_coverage=1 00:25:11.035 --rc genhtml_legend=1 00:25:11.035 --rc geninfo_all_blocks=1 00:25:11.035 --rc geninfo_unexecuted_blocks=1 00:25:11.035 00:25:11.035 ' 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.035 --rc genhtml_branch_coverage=1 00:25:11.035 --rc genhtml_function_coverage=1 00:25:11.035 --rc genhtml_legend=1 00:25:11.035 --rc geninfo_all_blocks=1 00:25:11.035 --rc geninfo_unexecuted_blocks=1 00:25:11.035 00:25:11.035 ' 00:25:11.035 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.035 --rc genhtml_branch_coverage=1 00:25:11.036 --rc genhtml_function_coverage=1 00:25:11.036 --rc genhtml_legend=1 00:25:11.036 --rc geninfo_all_blocks=1 00:25:11.036 --rc geninfo_unexecuted_blocks=1 00:25:11.036 00:25:11.036 ' 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@11 -- # rpc_py='/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock' 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@73 -- # bdevperf_pid=3966205 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@74 -- # trap 'killprocess $bdevperf_pid' SIGINT SIGTERM EXIT 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@75 -- # waitforlisten 3966205 00:25:11.036 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@835 -- # '[' -z 3966205 ']' 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@72 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock 00:25:11.036 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.036 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@72 -- # bdevperf_config 00:25:11.036 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@19 -- # local config 00:25:11.036 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.036 13:56:42 ocf.ocf_flush -- common/autotest_common.sh@10 -- # set +x 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@50 -- # cat 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@50 -- # config='{ 00:25:11.036 "method": "bdev_malloc_create", 00:25:11.036 "params": { 00:25:11.036 "name": "Malloc0", 00:25:11.036 "num_blocks": 102400, 00:25:11.036 "block_size": 512 00:25:11.036 } 00:25:11.036 }, 00:25:11.036 { 00:25:11.036 "method": "bdev_malloc_create", 00:25:11.036 "params": { 00:25:11.036 "name": "Malloc1", 00:25:11.036 "num_blocks": 1024000, 00:25:11.036 "block_size": 512 00:25:11.036 } 00:25:11.036 }, 00:25:11.036 { 00:25:11.036 "method": "bdev_ocf_create", 00:25:11.036 "params": { 00:25:11.036 "name": "MalCache0", 00:25:11.036 "mode": "wb", 00:25:11.036 "cache_line_size": 4, 00:25:11.036 "cache_bdev_name": "Malloc0", 00:25:11.036 "core_bdev_name": "Malloc1" 00:25:11.036 } 00:25:11.036 }' 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@52 -- # jq . 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@53 -- # IFS=, 00:25:11.036 13:56:42 ocf.ocf_flush -- integrity/flush.sh@54 -- # printf '%s\n' '{ 00:25:11.036 "method": "bdev_malloc_create", 00:25:11.036 "params": { 00:25:11.036 "name": "Malloc0", 00:25:11.036 "num_blocks": 102400, 00:25:11.036 "block_size": 512 00:25:11.036 } 00:25:11.036 }, 00:25:11.036 { 00:25:11.036 "method": "bdev_malloc_create", 00:25:11.036 "params": { 00:25:11.036 "name": "Malloc1", 00:25:11.036 "num_blocks": 1024000, 00:25:11.036 "block_size": 512 00:25:11.036 } 00:25:11.036 }, 00:25:11.036 { 00:25:11.036 "method": "bdev_ocf_create", 00:25:11.036 "params": { 00:25:11.036 "name": "MalCache0", 00:25:11.036 "mode": "wb", 00:25:11.036 "cache_line_size": 4, 00:25:11.036 "cache_bdev_name": "Malloc0", 00:25:11.036 "core_bdev_name": "Malloc1" 00:25:11.036 } 00:25:11.036 }' 00:25:11.036 [2024-12-05 13:56:42.551819] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
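Where stats.sh used write-through, flush.sh builds a write-back (wb) cache so that dirty lines actually accumulate: a 50 MiB Malloc0 cache device (102400 x 512 B), a 500 MiB Malloc1 core device, and MalCache0 with a 4 kiB cache line. bdevperf runs a 120 s write workload against it while the script drives an explicit flush over the RPC socket opened with -r. A sketch of that flush round trip, using the same rpc_py definition as the script (the real calls appear a few lines below in this log):

    rpc_py='/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

    # start flushing all dirty cache lines on MalCache0
    $rpc_py bdev_ocf_flush_start MalCache0

    # poll until the flush is no longer in progress, then confirm it finished cleanly
    while $rpc_py bdev_ocf_flush_status MalCache0 | jq -e .in_progress > /dev/null; do
        sleep 1
    done
    $rpc_py bdev_ocf_flush_status MalCache0 | jq -e '.status == 0'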
00:25:11.036 [2024-12-05 13:56:42.551895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3966205 ] 00:25:11.295 [2024-12-05 13:56:42.674755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.295 [2024-12-05 13:56:42.727966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.555 [2024-12-05 13:56:42.930484] 'OCF_Core' volume operations registered 00:25:11.555 [2024-12-05 13:56:42.930527] 'OCF_Cache' volume operations registered 00:25:11.555 [2024-12-05 13:56:42.934955] 'OCF Composite' volume operations registered 00:25:11.555 [2024-12-05 13:56:42.939385] 'SPDK_block_device' volume operations registered 00:25:11.814 [2024-12-05 13:56:43.099980] Inserting cache MalCache0 00:25:11.814 [2024-12-05 13:56:43.100470] MalCache0: Metadata initialized 00:25:11.814 [2024-12-05 13:56:43.100928] MalCache0: Successfully added 00:25:11.814 [2024-12-05 13:56:43.100946] MalCache0: Cache mode : wb 00:25:11.814 [2024-12-05 13:56:43.111598] MalCache0: Super block config offset : 0 kiB 00:25:11.814 [2024-12-05 13:56:43.111619] MalCache0: Super block config size : 2200 B 00:25:11.814 [2024-12-05 13:56:43.111626] MalCache0: Super block runtime offset : 128 kiB 00:25:11.814 [2024-12-05 13:56:43.111640] MalCache0: Super block runtime size : 4 B 00:25:11.814 [2024-12-05 13:56:43.111648] MalCache0: Reserved offset : 256 kiB 00:25:11.814 [2024-12-05 13:56:43.111654] MalCache0: Reserved size : 128 kiB 00:25:11.814 [2024-12-05 13:56:43.111667] MalCache0: Part config offset : 384 kiB 00:25:11.814 [2024-12-05 13:56:43.111673] MalCache0: Part config size : 48 kiB 00:25:11.814 [2024-12-05 13:56:43.111680] MalCache0: Part runtime offset : 640 kiB 00:25:11.814 [2024-12-05 13:56:43.111686] MalCache0: Part runtime size : 72 kiB 00:25:11.814 [2024-12-05 13:56:43.111693] MalCache0: Core config offset : 768 kiB 00:25:11.814 [2024-12-05 13:56:43.111699] MalCache0: Core config size : 512 kiB 00:25:11.815 [2024-12-05 13:56:43.111705] MalCache0: Core runtime offset : 1792 kiB 00:25:11.815 [2024-12-05 13:56:43.111711] MalCache0: Core runtime size : 1172 kiB 00:25:11.815 [2024-12-05 13:56:43.111718] MalCache0: Core UUID offset : 3072 kiB 00:25:11.815 [2024-12-05 13:56:43.111724] MalCache0: Core UUID size : 16384 kiB 00:25:11.815 [2024-12-05 13:56:43.111730] MalCache0: Cleaning offset : 35840 kiB 00:25:11.815 [2024-12-05 13:56:43.111736] MalCache0: Cleaning size : 44 kiB 00:25:11.815 [2024-12-05 13:56:43.111743] MalCache0: LRU list offset : 35968 kiB 00:25:11.815 [2024-12-05 13:56:43.111749] MalCache0: LRU list size : 36 kiB 00:25:11.815 [2024-12-05 13:56:43.111755] MalCache0: Collision offset : 36096 kiB 00:25:11.815 [2024-12-05 13:56:43.111761] MalCache0: Collision size : 44 kiB 00:25:11.815 [2024-12-05 13:56:43.111768] MalCache0: List info offset : 36224 kiB 00:25:11.815 [2024-12-05 13:56:43.111774] MalCache0: List info size : 36 kiB 00:25:11.815 [2024-12-05 13:56:43.111780] MalCache0: Hash offset : 36352 kiB 00:25:11.815 [2024-12-05 13:56:43.111786] MalCache0: Hash size : 4 kiB 00:25:11.815 [2024-12-05 13:56:43.111793] MalCache0: Cache line size: 4 kiB 00:25:11.815 [2024-12-05 13:56:43.111800] MalCache0: Metadata size on device: 36480 kiB 00:25:11.815 [2024-12-05 13:56:43.122342] MalCache0: Policy 'always' initialized successfully 00:25:11.815 [2024-12-05 13:56:43.211025] MalCache0: Done saving 
cache state! 00:25:11.815 [2024-12-05 13:56:43.241745] MalCache0: Cache attached 00:25:11.815 [2024-12-05 13:56:43.241841] MalCache0: Successfully attached 00:25:11.815 [2024-12-05 13:56:43.242117] MalCache0: Inserting core Malloc1 00:25:11.815 [2024-12-05 13:56:43.242141] MalCache0.Malloc1: Seqential cutoff init 00:25:11.815 [2024-12-05 13:56:43.272664] MalCache0.Malloc1: Successfully added 00:25:11.815 Running I/O for 120 seconds... 00:25:11.815 13:56:43 ocf.ocf_flush -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.815 13:56:43 ocf.ocf_flush -- common/autotest_common.sh@868 -- # return 0 00:25:11.815 13:56:43 ocf.ocf_flush -- integrity/flush.sh@76 -- # sleep 5 00:25:14.136 40079.00 IOPS, 156.56 MiB/s [2024-12-05T12:56:46.598Z] 41927.50 IOPS, 163.78 MiB/s [2024-12-05T12:56:47.538Z] 42554.33 IOPS, 166.23 MiB/s [2024-12-05T12:56:48.475Z] 42573.00 IOPS, 166.30 MiB/s [2024-12-05T12:56:48.475Z] 42804.00 IOPS, 167.20 MiB/s [2024-12-05T12:56:48.475Z] 13:56:48 ocf.ocf_flush -- integrity/flush.sh@78 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_start MalCache0 00:25:17.208 [2024-12-05 13:56:48.488457] MalCache0: Flushing cache 00:25:17.208 13:56:48 ocf.ocf_flush -- integrity/flush.sh@79 -- # sleep 1 00:25:17.208 [2024-12-05 13:56:48.594467] MalCache0: Flushing cache completed 00:25:18.145 42307.33 IOPS, 165.26 MiB/s [2024-12-05T12:56:49.671Z] 13:56:49 ocf.ocf_flush -- integrity/flush.sh@81 -- # check_flush_in_progress 00:25:18.145 13:56:49 ocf.ocf_flush -- integrity/flush.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0 00:25:18.145 13:56:49 ocf.ocf_flush -- integrity/flush.sh@15 -- # jq -e .in_progress 00:25:18.404 13:56:49 ocf.ocf_flush -- integrity/flush.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0 00:25:18.404 13:56:49 ocf.ocf_flush -- integrity/flush.sh@84 -- # jq -e '.status == 0' 00:25:18.664 true 00:25:18.664 13:56:50 ocf.ocf_flush -- integrity/flush.sh@1 -- # killprocess 3966205 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@954 -- # '[' -z 3966205 ']' 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@958 -- # kill -0 3966205 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@959 -- # uname 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3966205 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3966205' 00:25:18.664 killing process with pid 3966205 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@973 -- # kill 3966205 00:25:18.664 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@978 -- # wait 3966205 00:25:18.664 Received shutdown signal, test time was about 6.774319 seconds 00:25:18.664 00:25:18.664 Latency(us) 00:25:18.664 [2024-12-05T12:56:50.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.664 Job: MalCache0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:18.664 MalCache0 : 6.77 41933.12 163.80 0.00 0.00 3047.97 365.08 91180.52 00:25:18.664 
[2024-12-05T12:56:50.190Z] =================================================================================================================== 00:25:18.664 [2024-12-05T12:56:50.190Z] Total : 41933.12 163.80 0.00 0.00 3047.97 365.08 91180.52 00:25:18.664 [2024-12-05 13:56:50.080126] MalCache0: Flushing cache 00:25:18.664 [2024-12-05 13:56:50.168907] MalCache0: Flushing cache completed 00:25:18.664 [2024-12-05 13:56:50.168986] MalCache0: Stopping cache 00:25:18.923 [2024-12-05 13:56:50.255955] MalCache0: Done saving cache state! 00:25:18.923 [2024-12-05 13:56:50.272615] Cache MalCache0 successfully stopped 00:25:19.494 00:25:19.494 real 0m8.463s 00:25:19.494 user 0m8.715s 00:25:19.494 sys 0m0.731s 00:25:19.494 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.494 13:56:50 ocf.ocf_flush -- common/autotest_common.sh@10 -- # set +x 00:25:19.494 ************************************ 00:25:19.494 END TEST ocf_flush 00:25:19.494 ************************************ 00:25:19.494 13:56:50 ocf -- ocf/ocf.sh@15 -- # run_test ocf_create_destruct /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh 00:25:19.494 13:56:50 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:19.494 13:56:50 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.494 13:56:50 ocf -- common/autotest_common.sh@10 -- # set +x 00:25:19.494 ************************************ 00:25:19.494 START TEST ocf_create_destruct 00:25:19.494 ************************************ 00:25:19.494 13:56:50 ocf.ocf_create_destruct -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh 00:25:19.494 13:56:50 ocf.ocf_create_destruct -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:19.494 13:56:50 ocf.ocf_create_destruct -- common/autotest_common.sh@1711 -- # lcov --version 00:25:19.494 13:56:50 ocf.ocf_create_destruct -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@344 -- # case "$op" in 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@345 -- # : 1 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@365 -- # decimal 1 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@353 -- # local d=1 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@355 -- # echo 1 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@366 -- # decimal 2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@353 -- # local d=2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@355 -- # echo 2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- scripts/common.sh@368 -- # return 0 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:19.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.756 --rc genhtml_branch_coverage=1 00:25:19.756 --rc genhtml_function_coverage=1 00:25:19.756 --rc genhtml_legend=1 00:25:19.756 --rc geninfo_all_blocks=1 00:25:19.756 --rc geninfo_unexecuted_blocks=1 00:25:19.756 00:25:19.756 ' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:19.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.756 --rc genhtml_branch_coverage=1 00:25:19.756 --rc genhtml_function_coverage=1 00:25:19.756 --rc genhtml_legend=1 00:25:19.756 --rc geninfo_all_blocks=1 00:25:19.756 --rc geninfo_unexecuted_blocks=1 00:25:19.756 00:25:19.756 ' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:19.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.756 --rc genhtml_branch_coverage=1 00:25:19.756 --rc genhtml_function_coverage=1 00:25:19.756 --rc genhtml_legend=1 00:25:19.756 --rc geninfo_all_blocks=1 00:25:19.756 --rc geninfo_unexecuted_blocks=1 00:25:19.756 00:25:19.756 ' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:19.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.756 --rc genhtml_branch_coverage=1 00:25:19.756 --rc genhtml_function_coverage=1 00:25:19.756 --rc genhtml_legend=1 00:25:19.756 --rc geninfo_all_blocks=1 00:25:19.756 --rc geninfo_unexecuted_blocks=1 00:25:19.756 00:25:19.756 ' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- management/create-destruct.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- management/create-destruct.sh@21 -- # spdk_pid=3967469 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- management/create-destruct.sh@23 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- management/create-destruct.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:25:19.756 13:56:51 
ocf.ocf_create_destruct -- management/create-destruct.sh@25 -- # waitforlisten 3967469 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@835 -- # '[' -z 3967469 ']' 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.756 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@10 -- # set +x 00:25:19.756 [2024-12-05 13:56:51.099930] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:19.756 [2024-12-05 13:56:51.100005] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3967469 ] 00:25:19.756 [2024-12-05 13:56:51.221655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.017 [2024-12-05 13:56:51.277462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.017 [2024-12-05 13:56:51.477383] 'OCF_Core' volume operations registered 00:25:20.017 [2024-12-05 13:56:51.477423] 'OCF_Cache' volume operations registered 00:25:20.017 [2024-12-05 13:56:51.481861] 'OCF Composite' volume operations registered 00:25:20.017 [2024-12-05 13:56:51.486288] 'SPDK_block_device' volume operations registered 00:25:20.276 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.276 13:56:51 ocf.ocf_create_destruct -- common/autotest_common.sh@868 -- # return 0 00:25:20.276 13:56:51 ocf.ocf_create_destruct -- management/create-destruct.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:25:20.536 Malloc0 00:25:20.536 13:56:51 ocf.ocf_create_destruct -- management/create-destruct.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:25:20.795 Malloc1 00:25:20.795 13:56:52 ocf.ocf_create_destruct -- management/create-destruct.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt Malloc0 NonExisting 00:25:21.055 [2024-12-05 13:56:52.486116] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for core device 'NonExisting' to connect 00:25:21.055 PartCache 00:25:21.055 13:56:52 ocf.ocf_create_destruct -- management/create-destruct.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs PartCache 00:25:21.055 13:56:52 ocf.ocf_create_destruct -- management/create-destruct.sh@32 -- # jq -e '.[0] | .started == false and .cache.attached and .core.attached == false' 00:25:21.316 true 00:25:21.316 13:56:52 ocf.ocf_create_destruct -- management/create-destruct.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs NonExisting 00:25:21.316 13:56:52 ocf.ocf_create_destruct -- management/create-destruct.sh@35 -- # jq -e '.[0] | .name == "PartCache"' 00:25:21.575 true 00:25:21.575 13:56:53 
ocf.ocf_create_destruct -- management/create-destruct.sh@38 -- # bdev_check_claimed Malloc0 00:25:21.576 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:25:21.576 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:21.835 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:25:21.835 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@14 -- # return 0 00:25:21.835 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete PartCache 00:25:22.094 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@44 -- # bdev_check_claimed Malloc0 00:25:22.094 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:25:22.094 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:22.353 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' false = true ']' 00:25:22.353 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@16 -- # return 1 00:25:22.353 13:56:53 ocf.ocf_create_destruct -- management/create-destruct.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create FullCache wt Malloc0 Malloc1 00:25:22.611 [2024-12-05 13:56:54.096177] Inserting cache FullCache 00:25:22.611 [2024-12-05 13:56:54.096669] FullCache: Metadata initialized 00:25:22.611 [2024-12-05 13:56:54.097115] FullCache: Successfully added 00:25:22.611 [2024-12-05 13:56:54.097130] FullCache: Cache mode : wt 00:25:22.611 [2024-12-05 13:56:54.107971] FullCache: Super block config offset : 0 kiB 00:25:22.611 [2024-12-05 13:56:54.107997] FullCache: Super block config size : 2200 B 00:25:22.611 [2024-12-05 13:56:54.108004] FullCache: Super block runtime offset : 128 kiB 00:25:22.611 [2024-12-05 13:56:54.108010] FullCache: Super block runtime size : 4 B 00:25:22.611 [2024-12-05 13:56:54.108017] FullCache: Reserved offset : 256 kiB 00:25:22.611 [2024-12-05 13:56:54.108024] FullCache: Reserved size : 128 kiB 00:25:22.611 [2024-12-05 13:56:54.108030] FullCache: Part config offset : 384 kiB 00:25:22.611 [2024-12-05 13:56:54.108037] FullCache: Part config size : 48 kiB 00:25:22.611 [2024-12-05 13:56:54.108043] FullCache: Part runtime offset : 640 kiB 00:25:22.611 [2024-12-05 13:56:54.108050] FullCache: Part runtime size : 72 kiB 00:25:22.611 [2024-12-05 13:56:54.108056] FullCache: Core config offset : 768 kiB 00:25:22.611 [2024-12-05 13:56:54.108063] FullCache: Core config size : 512 kiB 00:25:22.611 [2024-12-05 13:56:54.108070] FullCache: Core runtime offset : 1792 kiB 00:25:22.611 [2024-12-05 13:56:54.108076] FullCache: Core runtime size : 1172 kiB 00:25:22.611 [2024-12-05 13:56:54.108090] FullCache: Core UUID offset : 3072 kiB 00:25:22.611 [2024-12-05 13:56:54.108096] FullCache: Core UUID size : 16384 kiB 00:25:22.611 [2024-12-05 13:56:54.108103] FullCache: Cleaning offset : 35840 kiB 00:25:22.611 [2024-12-05 13:56:54.108109] FullCache: Cleaning size : 196 kiB 00:25:22.612 [2024-12-05 13:56:54.108116] FullCache: LRU list offset : 36096 kiB 00:25:22.612 [2024-12-05 13:56:54.108122] FullCache: LRU list size : 148 kiB 00:25:22.612 [2024-12-05 13:56:54.108129] FullCache: Collision offset : 36352 kiB 00:25:22.612 
[2024-12-05 13:56:54.108135] FullCache: Collision size : 196 kiB 00:25:22.612 [2024-12-05 13:56:54.108142] FullCache: List info offset : 36608 kiB 00:25:22.612 [2024-12-05 13:56:54.108148] FullCache: List info size : 148 kiB 00:25:22.612 [2024-12-05 13:56:54.108155] FullCache: Hash offset : 36864 kiB 00:25:22.612 [2024-12-05 13:56:54.108161] FullCache: Hash size : 20 kiB 00:25:22.612 [2024-12-05 13:56:54.108168] FullCache: Cache line size: 4 kiB 00:25:22.612 [2024-12-05 13:56:54.108175] FullCache: Metadata size on device: 36992 kiB 00:25:22.612 [2024-12-05 13:56:54.118676] FullCache: Policy 'always' initialized successfully 00:25:22.870 [2024-12-05 13:56:54.233087] FullCache: Done saving cache state! 00:25:22.870 [2024-12-05 13:56:54.265388] FullCache: Cache attached 00:25:22.870 [2024-12-05 13:56:54.265483] FullCache: Successfully attached 00:25:22.870 [2024-12-05 13:56:54.265777] FullCache: Inserting core Malloc1 00:25:22.870 [2024-12-05 13:56:54.265802] FullCache.Malloc1: Seqential cutoff init 00:25:22.870 [2024-12-05 13:56:54.297668] FullCache.Malloc1: Successfully added 00:25:22.870 FullCache 00:25:22.870 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@51 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:25:22.870 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs FullCache 00:25:23.129 true 00:25:23.129 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc0 00:25:23.129 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:25:23.129 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:23.388 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:25:23.388 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@14 -- # return 0 00:25:23.388 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc1 00:25:23.388 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1 00:25:23.388 13:56:54 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:23.647 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:25:23.647 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@14 -- # return 0 00:25:23.647 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete FullCache 00:25:23.905 [2024-12-05 13:56:55.406895] FullCache: Flushing cache 00:25:23.905 [2024-12-05 13:56:55.406935] FullCache: Flushing cache completed 00:25:23.905 [2024-12-05 13:56:55.407958] FullCache.Malloc1: Removing core 00:25:24.164 [2024-12-05 13:56:55.440952] FullCache: Core Malloc1 successfully removed 00:25:24.164 [2024-12-05 13:56:55.441020] FullCache: Stopping cache 00:25:24.164 [2024-12-05 13:56:55.547528] FullCache: Done saving cache state! 
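Each create/delete pair in create-destruct.sh follows the same pattern: create the OCF vbdev, check that the backing bdevs are claimed, delete it, and check that the claims are released (the jq '.[0].claimed' filter inside the bdev_check_claimed helper). Condensed into plain RPCs, the FullCache round trip shown here looks roughly like:

    rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

    $rpc_py bdev_ocf_create FullCache wt Malloc0 Malloc1    # cache=Malloc0, core=Malloc1

    # both backing bdevs are now claimed by the OCF vbdev
    $rpc_py bdev_get_bdevs -b Malloc0 | jq '.[0].claimed'    # true
    $rpc_py bdev_get_bdevs -b Malloc1 | jq '.[0].claimed'    # true

    # deleting the vbdev flushes the cache, detaches the core and releases the claims
    $rpc_py bdev_ocf_delete FullCache
    $rpc_py bdev_get_bdevs -b Malloc0 | jq '.[0].claimed'    # false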
00:25:24.164 [2024-12-05 13:56:55.563929] Cache FullCache successfully stopped 00:25:24.165 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@60 -- # bdev_check_claimed Malloc0 00:25:24.165 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:25:24.165 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:24.424 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' false = true ']' 00:25:24.424 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@16 -- # return 1 00:25:24.424 13:56:55 ocf.ocf_create_destruct -- management/create-destruct.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create HotCache wt Malloc0 Malloc1 00:25:24.683 [2024-12-05 13:56:56.114735] Inserting cache HotCache 00:25:24.683 [2024-12-05 13:56:56.115152] HotCache: Metadata initialized 00:25:24.683 [2024-12-05 13:56:56.115580] HotCache: Successfully added 00:25:24.683 [2024-12-05 13:56:56.115589] HotCache: Cache mode : wt 00:25:24.683 [2024-12-05 13:56:56.125555] HotCache: Super block config offset : 0 kiB 00:25:24.683 [2024-12-05 13:56:56.125578] HotCache: Super block config size : 2200 B 00:25:24.683 [2024-12-05 13:56:56.125586] HotCache: Super block runtime offset : 128 kiB 00:25:24.683 [2024-12-05 13:56:56.125592] HotCache: Super block runtime size : 4 B 00:25:24.683 [2024-12-05 13:56:56.125599] HotCache: Reserved offset : 256 kiB 00:25:24.683 [2024-12-05 13:56:56.125606] HotCache: Reserved size : 128 kiB 00:25:24.683 [2024-12-05 13:56:56.125612] HotCache: Part config offset : 384 kiB 00:25:24.683 [2024-12-05 13:56:56.125619] HotCache: Part config size : 48 kiB 00:25:24.683 [2024-12-05 13:56:56.125625] HotCache: Part runtime offset : 640 kiB 00:25:24.683 [2024-12-05 13:56:56.125640] HotCache: Part runtime size : 72 kiB 00:25:24.683 [2024-12-05 13:56:56.125647] HotCache: Core config offset : 768 kiB 00:25:24.683 [2024-12-05 13:56:56.125653] HotCache: Core config size : 512 kiB 00:25:24.683 [2024-12-05 13:56:56.125660] HotCache: Core runtime offset : 1792 kiB 00:25:24.683 [2024-12-05 13:56:56.125666] HotCache: Core runtime size : 1172 kiB 00:25:24.683 [2024-12-05 13:56:56.125673] HotCache: Core UUID offset : 3072 kiB 00:25:24.683 [2024-12-05 13:56:56.125679] HotCache: Core UUID size : 16384 kiB 00:25:24.683 [2024-12-05 13:56:56.125686] HotCache: Cleaning offset : 35840 kiB 00:25:24.683 [2024-12-05 13:56:56.125692] HotCache: Cleaning size : 196 kiB 00:25:24.683 [2024-12-05 13:56:56.125699] HotCache: LRU list offset : 36096 kiB 00:25:24.683 [2024-12-05 13:56:56.125705] HotCache: LRU list size : 148 kiB 00:25:24.683 [2024-12-05 13:56:56.125711] HotCache: Collision offset : 36352 kiB 00:25:24.683 [2024-12-05 13:56:56.125718] HotCache: Collision size : 196 kiB 00:25:24.683 [2024-12-05 13:56:56.125724] HotCache: List info offset : 36608 kiB 00:25:24.683 [2024-12-05 13:56:56.125731] HotCache: List info size : 148 kiB 00:25:24.683 [2024-12-05 13:56:56.125737] HotCache: Hash offset : 36864 kiB 00:25:24.683 [2024-12-05 13:56:56.125744] HotCache: Hash size : 20 kiB 00:25:24.683 [2024-12-05 13:56:56.125751] HotCache: Cache line size: 4 kiB 00:25:24.683 [2024-12-05 13:56:56.125758] HotCache: Metadata size on device: 36992 kiB 00:25:24.683 [2024-12-05 13:56:56.135362] HotCache: Policy 'always' initialized successfully 00:25:24.941 [2024-12-05 13:56:56.248838] HotCache: Done saving cache state! 
00:25:24.941 [2024-12-05 13:56:56.279838] HotCache: Cache attached 00:25:24.941 [2024-12-05 13:56:56.279936] HotCache: Successfully attached 00:25:24.941 [2024-12-05 13:56:56.280220] HotCache: Inserting core Malloc1 00:25:24.941 [2024-12-05 13:56:56.280242] HotCache.Malloc1: Seqential cutoff init 00:25:24.941 [2024-12-05 13:56:56.311450] HotCache.Malloc1: Successfully added 00:25:24.941 HotCache 00:25:24.941 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc0 00:25:24.941 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:25:24.941 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:25.199 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:25:25.199 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@14 -- # return 0 00:25:25.199 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc1 00:25:25.199 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1 00:25:25.199 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:25.459 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:25:25.459 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@14 -- # return 0 00:25:25.459 13:56:56 ocf.ocf_create_destruct -- management/create-destruct.sh@72 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:25.718 [2024-12-05 13:56:57.039426] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'HotCache' because its cache device 'Malloc0' was removed 00:25:25.718 [2024-12-05 13:56:57.039723] HotCache: Flushing cache 00:25:25.718 [2024-12-05 13:56:57.039743] HotCache: Flushing cache completed 00:25:25.718 [2024-12-05 13:56:57.039828] HotCache: Stopping cache 00:25:25.718 [2024-12-05 13:56:57.148097] HotCache: Done saving cache state! 
00:25:25.718 [2024-12-05 13:56:57.164187] Cache HotCache successfully stopped 00:25:25.718 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@74 -- # bdev_check_claimed Malloc1 00:25:25.718 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1 00:25:25.718 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:25:25.977 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@13 -- # '[' false = true ']' 00:25:25.977 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@16 -- # return 1 00:25:26.236 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:25:26.496 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@79 -- # status='[ 00:25:26.496 { 00:25:26.496 "name": "Malloc1", 00:25:26.496 "aliases": [ 00:25:26.496 "6530f0f4-1d15-4ee0-85fc-c795e2080ee9" 00:25:26.496 ], 00:25:26.496 "product_name": "Malloc disk", 00:25:26.496 "block_size": 512, 00:25:26.496 "num_blocks": 206848, 00:25:26.496 "uuid": "6530f0f4-1d15-4ee0-85fc-c795e2080ee9", 00:25:26.496 "assigned_rate_limits": { 00:25:26.496 "rw_ios_per_sec": 0, 00:25:26.496 "rw_mbytes_per_sec": 0, 00:25:26.496 "r_mbytes_per_sec": 0, 00:25:26.496 "w_mbytes_per_sec": 0 00:25:26.496 }, 00:25:26.496 "claimed": false, 00:25:26.496 "zoned": false, 00:25:26.496 "supported_io_types": { 00:25:26.496 "read": true, 00:25:26.496 "write": true, 00:25:26.496 "unmap": true, 00:25:26.496 "flush": true, 00:25:26.496 "reset": true, 00:25:26.496 "nvme_admin": false, 00:25:26.496 "nvme_io": false, 00:25:26.496 "nvme_io_md": false, 00:25:26.496 "write_zeroes": true, 00:25:26.496 "zcopy": true, 00:25:26.496 "get_zone_info": false, 00:25:26.496 "zone_management": false, 00:25:26.496 "zone_append": false, 00:25:26.496 "compare": false, 00:25:26.496 "compare_and_write": false, 00:25:26.496 "abort": true, 00:25:26.496 "seek_hole": false, 00:25:26.496 "seek_data": false, 00:25:26.496 "copy": true, 00:25:26.496 "nvme_iov_md": false 00:25:26.496 }, 00:25:26.496 "memory_domains": [ 00:25:26.496 { 00:25:26.496 "dma_device_id": "system", 00:25:26.496 "dma_device_type": 1 00:25:26.497 }, 00:25:26.497 { 00:25:26.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.497 "dma_device_type": 2 00:25:26.497 } 00:25:26.497 ], 00:25:26.497 "driver_specific": {} 00:25:26.497 } 00:25:26.497 ]' 00:25:26.497 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@80 -- # echo '[' '{' '"name":' '"Malloc1",' '"aliases":' '[' '"6530f0f4-1d15-4ee0-85fc-c795e2080ee9"' '],' '"product_name":' '"Malloc' 'disk",' '"block_size":' 512, '"num_blocks":' 206848, '"uuid":' '"6530f0f4-1d15-4ee0-85fc-c795e2080ee9",' '"assigned_rate_limits":' '{' '"rw_ios_per_sec":' 0, '"rw_mbytes_per_sec":' 0, '"r_mbytes_per_sec":' 0, '"w_mbytes_per_sec":' 0 '},' '"claimed":' false, '"zoned":' false, '"supported_io_types":' '{' '"read":' true, '"write":' true, '"unmap":' true, '"flush":' true, '"reset":' true, '"nvme_admin":' false, '"nvme_io":' false, '"nvme_io_md":' false, '"write_zeroes":' true, '"zcopy":' true, '"get_zone_info":' false, '"zone_management":' false, '"zone_append":' false, '"compare":' false, '"compare_and_write":' false, '"abort":' true, '"seek_hole":' false, '"seek_data":' false, '"copy":' true, '"nvme_iov_md":' false '},' '"memory_domains":' '[' '{' '"dma_device_id":' '"system",' 
'"dma_device_type":' 1 '},' '{' '"dma_device_id":' '"SPDK_ACCEL_DMA_DEVICE",' '"dma_device_type":' 2 '}' '],' '"driver_specific":' '{}' '}' ']' 00:25:26.497 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@80 -- # jq 'map(select(.name == "HotCache")) == []' 00:25:26.497 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@80 -- # gone=true 00:25:26.497 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@81 -- # [[ true == false ]] 00:25:26.497 13:56:57 ocf.ocf_create_destruct -- management/create-destruct.sh@87 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt NonExisting Malloc1 00:25:26.756 [2024-12-05 13:56:58.063007] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for cache device 'NonExisting' to connect 00:25:26.756 PartCache 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- management/create-destruct.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- management/create-destruct.sh@91 -- # killprocess 3967469 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@954 -- # '[' -z 3967469 ']' 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@958 -- # kill -0 3967469 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@959 -- # uname 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3967469 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.756 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3967469' 00:25:26.756 killing process with pid 3967469 00:25:26.757 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@973 -- # kill 3967469 00:25:26.757 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@978 -- # wait 3967469 00:25:27.016 [2024-12-05 13:56:58.298779] bdev.c:2555:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc1'! 
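The tail of the test exercises hot-remove from both directions: deleting the cache device Malloc0 makes vbdev_ocf's hotremove callback flush and tear down HotCache on the spot, and shutting the target down while PartCache still claims its core device produces the 'Unregistering claimed bdev' warning above; the next lines show PartCache being deinitialized because that core device went away. Verifying the teardown is a one-line jq filter over bdev_get_bdevs:

    rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

    # drop the backing cache device; OCF reacts through its hotremove callback
    $rpc_py bdev_malloc_delete Malloc0

    # the dependent OCF bdev must no longer be listed
    $rpc_py bdev_get_bdevs | jq -e 'map(select(.name == "HotCache")) == []'    # exit 0 when gone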
00:25:27.016 [2024-12-05 13:56:58.298896] vbdev_ocf.c:1361:hotremove_cb: *NOTICE*: Deinitializing 'PartCache' because its core device 'Malloc1' was removed 00:25:27.321 00:25:27.321 real 0m7.806s 00:25:27.321 user 0m12.510s 00:25:27.321 sys 0m1.507s 00:25:27.321 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.321 13:56:58 ocf.ocf_create_destruct -- common/autotest_common.sh@10 -- # set +x 00:25:27.321 ************************************ 00:25:27.321 END TEST ocf_create_destruct 00:25:27.321 ************************************ 00:25:27.321 13:56:58 ocf -- ocf/ocf.sh@16 -- # run_test ocf_multicore /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh 00:25:27.321 13:56:58 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:27.321 13:56:58 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.321 13:56:58 ocf -- common/autotest_common.sh@10 -- # set +x 00:25:27.321 ************************************ 00:25:27.321 START TEST ocf_multicore 00:25:27.321 ************************************ 00:25:27.321 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh 00:25:27.613 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:27.613 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1711 -- # lcov --version 00:25:27.613 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:27.613 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@344 -- # case "$op" in 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@345 -- # : 1 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@365 -- # decimal 1 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@353 -- # local d=1 00:25:27.613 13:56:58 ocf.ocf_multicore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@355 -- # echo 1 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@366 -- # decimal 2 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@353 -- # local d=2 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@355 -- # echo 2 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.614 13:56:58 ocf.ocf_multicore -- scripts/common.sh@368 -- # return 0 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:27.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.614 --rc genhtml_branch_coverage=1 00:25:27.614 --rc genhtml_function_coverage=1 00:25:27.614 --rc genhtml_legend=1 00:25:27.614 --rc geninfo_all_blocks=1 00:25:27.614 --rc geninfo_unexecuted_blocks=1 00:25:27.614 00:25:27.614 ' 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:27.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.614 --rc genhtml_branch_coverage=1 00:25:27.614 --rc genhtml_function_coverage=1 00:25:27.614 --rc genhtml_legend=1 00:25:27.614 --rc geninfo_all_blocks=1 00:25:27.614 --rc geninfo_unexecuted_blocks=1 00:25:27.614 00:25:27.614 ' 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:27.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.614 --rc genhtml_branch_coverage=1 00:25:27.614 --rc genhtml_function_coverage=1 00:25:27.614 --rc genhtml_legend=1 00:25:27.614 --rc geninfo_all_blocks=1 00:25:27.614 --rc geninfo_unexecuted_blocks=1 00:25:27.614 00:25:27.614 ' 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:27.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.614 --rc genhtml_branch_coverage=1 00:25:27.614 --rc genhtml_function_coverage=1 00:25:27.614 --rc genhtml_legend=1 00:25:27.614 --rc geninfo_all_blocks=1 00:25:27.614 --rc geninfo_unexecuted_blocks=1 00:25:27.614 00:25:27.614 ' 00:25:27.614 13:56:58 ocf.ocf_multicore -- management/multicore.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:25:27.614 13:56:58 ocf.ocf_multicore -- management/multicore.sh@12 -- # spdk_pid='?' 
00:25:27.614 13:56:58 ocf.ocf_multicore -- management/multicore.sh@24 -- # start_spdk 00:25:27.614 13:56:58 ocf.ocf_multicore -- management/multicore.sh@15 -- # spdk_pid=3968516 00:25:27.614 13:56:58 ocf.ocf_multicore -- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:27.614 13:56:58 ocf.ocf_multicore -- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:25:27.614 13:56:58 ocf.ocf_multicore -- management/multicore.sh@17 -- # waitforlisten 3968516 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@835 -- # '[' -z 3968516 ']' 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.614 13:56:58 ocf.ocf_multicore -- common/autotest_common.sh@10 -- # set +x 00:25:27.614 [2024-12-05 13:56:58.999626] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:27.614 [2024-12-05 13:56:58.999717] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3968516 ] 00:25:27.614 [2024-12-05 13:56:59.121167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.894 [2024-12-05 13:56:59.177089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.894 [2024-12-05 13:56:59.378122] 'OCF_Core' volume operations registered 00:25:27.895 [2024-12-05 13:56:59.378164] 'OCF_Cache' volume operations registered 00:25:27.895 [2024-12-05 13:56:59.382544] 'OCF Composite' volume operations registered 00:25:27.895 [2024-12-05 13:56:59.386907] 'SPDK_block_device' volume operations registered 00:25:28.154 13:56:59 ocf.ocf_multicore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.154 13:56:59 ocf.ocf_multicore -- common/autotest_common.sh@868 -- # return 0 00:25:28.154 13:56:59 ocf.ocf_multicore -- management/multicore.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core0 00:25:28.414 Core0 00:25:28.414 13:56:59 ocf.ocf_multicore -- management/multicore.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core1 00:25:28.673 Core1 00:25:28.673 13:57:00 ocf.ocf_multicore -- management/multicore.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Core0 00:25:28.932 [2024-12-05 13:57:00.308562] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for cache device 'Cache' to connect 00:25:28.932 C1 00:25:28.932 13:57:00 ocf.ocf_multicore -- management/multicore.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core1 00:25:29.192 [2024-12-05 13:57:00.577278] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for cache device 'Cache' to connect 00:25:29.192 C2 00:25:29.192 13:57:00 ocf.ocf_multicore -- 
management/multicore.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:29.192 13:57:00 ocf.ocf_multicore -- management/multicore.sh@34 -- # jq -e 'any(select(.started)) == false' 00:25:29.451 true 00:25:29.451 13:57:00 ocf.ocf_multicore -- management/multicore.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache 00:25:29.711 [2024-12-05 13:57:01.051203] Inserting cache C1 00:25:29.711 [2024-12-05 13:57:01.051619] C1: Metadata initialized 00:25:29.711 [2024-12-05 13:57:01.052071] C1: Successfully added 00:25:29.711 [2024-12-05 13:57:01.052087] C1: Cache mode : wt 00:25:29.711 [2024-12-05 13:57:01.052112] vbdev_ocf.c:1086:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache 00:25:29.711 Cache 00:25:29.711 [2024-12-05 13:57:01.062792] C1: Super block config offset : 0 kiB 00:25:29.711 [2024-12-05 13:57:01.062823] C1: Super block config size : 2200 B 00:25:29.711 [2024-12-05 13:57:01.062830] C1: Super block runtime offset : 128 kiB 00:25:29.711 [2024-12-05 13:57:01.062837] C1: Super block runtime size : 4 B 00:25:29.711 [2024-12-05 13:57:01.062843] C1: Reserved offset : 256 kiB 00:25:29.711 [2024-12-05 13:57:01.062850] C1: Reserved size : 128 kiB 00:25:29.711 [2024-12-05 13:57:01.062857] C1: Part config offset : 384 kiB 00:25:29.711 [2024-12-05 13:57:01.062863] C1: Part config size : 48 kiB 00:25:29.711 [2024-12-05 13:57:01.062870] C1: Part runtime offset : 640 kiB 00:25:29.711 [2024-12-05 13:57:01.062876] C1: Part runtime size : 72 kiB 00:25:29.711 [2024-12-05 13:57:01.062883] C1: Core config offset : 768 kiB 00:25:29.711 [2024-12-05 13:57:01.062889] C1: Core config size : 512 kiB 00:25:29.711 [2024-12-05 13:57:01.062896] C1: Core runtime offset : 1792 kiB 00:25:29.711 [2024-12-05 13:57:01.062902] C1: Core runtime size : 1172 kiB 00:25:29.711 [2024-12-05 13:57:01.062909] C1: Core UUID offset : 3072 kiB 00:25:29.711 [2024-12-05 13:57:01.062915] C1: Core UUID size : 16384 kiB 00:25:29.711 [2024-12-05 13:57:01.062922] C1: Cleaning offset : 35840 kiB 00:25:29.711 [2024-12-05 13:57:01.062928] C1: Cleaning size : 196 kiB 00:25:29.711 [2024-12-05 13:57:01.062935] C1: LRU list offset : 36096 kiB 00:25:29.711 [2024-12-05 13:57:01.062941] C1: LRU list size : 148 kiB 00:25:29.711 [2024-12-05 13:57:01.062947] C1: Collision offset : 36352 kiB 00:25:29.711 [2024-12-05 13:57:01.062954] C1: Collision size : 196 kiB 00:25:29.711 [2024-12-05 13:57:01.062960] C1: List info offset : 36608 kiB 00:25:29.711 [2024-12-05 13:57:01.062967] C1: List info size : 148 kiB 00:25:29.711 [2024-12-05 13:57:01.062973] C1: Hash offset : 36864 kiB 00:25:29.711 [2024-12-05 13:57:01.062980] C1: Hash size : 20 kiB 00:25:29.711 [2024-12-05 13:57:01.062987] C1: Cache line size: 4 kiB 00:25:29.711 [2024-12-05 13:57:01.062993] C1: Metadata size on device: 36992 kiB 00:25:29.711 [2024-12-05 13:57:01.073355] C1: Policy 'always' initialized successfully 00:25:29.711 13:57:01 ocf.ocf_multicore -- management/multicore.sh@39 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:29.711 13:57:01 ocf.ocf_multicore -- management/multicore.sh@39 -- # jq -e 'all(select(.started)) == true' 00:25:29.711 [2024-12-05 13:57:01.187130] C1: Done saving cache state! 
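The attach ordering exercised above can be reproduced by hand. Below is a minimal sketch of the RPC sequence this part of multicore.sh drives, assuming a running SPDK target; the rpc shell variable is only shorthand for the rpc.py path used throughout this run.

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py   # shorthand for the path shown in the log

    # Core devices exist before the cache device does.
    $rpc bdev_malloc_create 1 512 -b Core0
    $rpc bdev_malloc_create 1 512 -b Core1

    # Both OCF bdevs name a cache bdev 'Cache' that does not exist yet,
    # so they report "waiting for cache device" and are not started.
    $rpc bdev_ocf_create C1 wt Cache Core0
    $rpc bdev_ocf_create C2 wt Cache Core1
    $rpc bdev_ocf_get_bdevs | jq -e 'any(select(.started)) == false'

    # Creating the cache bdev triggers attach; C2 connects to the cache
    # instance that C1 already started on 'Cache'.
    $rpc bdev_malloc_create 101 512 -b Cache
    $rpc bdev_ocf_get_bdevs | jq -e 'all(select(.started)) == true'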
00:25:29.711 [2024-12-05 13:57:01.218493] C1: Cache attached 00:25:29.711 [2024-12-05 13:57:01.218591] C1: Successfully attached 00:25:29.711 [2024-12-05 13:57:01.218877] C1: Inserting core Core1 00:25:29.711 [2024-12-05 13:57:01.218902] C1.Core1: Seqential cutoff init 00:25:29.970 [2024-12-05 13:57:01.250624] C1.Core1: Successfully added 00:25:29.970 [2024-12-05 13:57:01.251430] C1: Inserting core Core0 00:25:29.971 [2024-12-05 13:57:01.251463] C1.Core0: Seqential cutoff init 00:25:29.971 [2024-12-05 13:57:01.283917] C1.Core0: Successfully added 00:25:29.971 true 00:25:29.971 13:57:01 ocf.ocf_multicore -- management/multicore.sh@43 -- # waitforbdev C2 00:25:29.971 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@903 -- # local bdev_name=C2 00:25:29.971 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:29.971 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@905 -- # local i 00:25:29.971 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:29.971 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:29.971 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:30.230 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b C2 -t 2000 00:25:30.490 [ 00:25:30.490 { 00:25:30.490 "name": "C2", 00:25:30.490 "aliases": [ 00:25:30.490 "88292720-5108-5f95-9f21-16b6bfa89cc5" 00:25:30.490 ], 00:25:30.490 "product_name": "SPDK OCF", 00:25:30.490 "block_size": 512, 00:25:30.490 "num_blocks": 2048, 00:25:30.490 "uuid": "88292720-5108-5f95-9f21-16b6bfa89cc5", 00:25:30.490 "assigned_rate_limits": { 00:25:30.490 "rw_ios_per_sec": 0, 00:25:30.490 "rw_mbytes_per_sec": 0, 00:25:30.490 "r_mbytes_per_sec": 0, 00:25:30.490 "w_mbytes_per_sec": 0 00:25:30.490 }, 00:25:30.490 "claimed": false, 00:25:30.490 "zoned": false, 00:25:30.490 "supported_io_types": { 00:25:30.490 "read": true, 00:25:30.490 "write": true, 00:25:30.490 "unmap": true, 00:25:30.490 "flush": true, 00:25:30.490 "reset": false, 00:25:30.490 "nvme_admin": false, 00:25:30.490 "nvme_io": false, 00:25:30.490 "nvme_io_md": false, 00:25:30.490 "write_zeroes": true, 00:25:30.490 "zcopy": false, 00:25:30.490 "get_zone_info": false, 00:25:30.490 "zone_management": false, 00:25:30.490 "zone_append": false, 00:25:30.490 "compare": false, 00:25:30.490 "compare_and_write": false, 00:25:30.490 "abort": false, 00:25:30.490 "seek_hole": false, 00:25:30.490 "seek_data": false, 00:25:30.490 "copy": false, 00:25:30.490 "nvme_iov_md": false 00:25:30.490 }, 00:25:30.490 "driver_specific": { 00:25:30.490 "cache_device": "Cache", 00:25:30.490 "core_device": "Core1", 00:25:30.490 "mode": "wt", 00:25:30.490 "cache_line_size": 4, 00:25:30.490 "metadata_volatile": false 00:25:30.490 } 00:25:30.490 } 00:25:30.490 ] 00:25:30.490 13:57:01 ocf.ocf_multicore -- common/autotest_common.sh@911 -- # return 0 00:25:30.490 13:57:01 ocf.ocf_multicore -- management/multicore.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete C2 00:25:30.749 [2024-12-05 13:57:02.080000] C1: Flushing cache 00:25:30.749 [2024-12-05 13:57:02.080035] C1: Flushing cache completed 00:25:30.749 [2024-12-05 13:57:02.081047] C1.Core1: Removing core 00:25:30.749 [2024-12-05 13:57:02.113385] C1: Core Core1 successfully removed 00:25:30.749 [2024-12-05 13:57:02.113440] 
vbdev_ocf.c: 299:stop_vbdev: *NOTICE*: Not stopping cache instance 'Cache' because it is referenced by other OCF bdev 00:25:30.749 13:57:02 ocf.ocf_multicore -- management/multicore.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C1 00:25:30.749 13:57:02 ocf.ocf_multicore -- management/multicore.sh@49 -- # jq -e '.[0] | .started' 00:25:31.008 true 00:25:31.008 13:57:02 ocf.ocf_multicore -- management/multicore.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core1 00:25:31.267 [2024-12-05 13:57:02.660784] vbdev_ocf.c:1086:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache 00:25:31.267 [2024-12-05 13:57:02.661020] C1: Inserting core Core1 00:25:31.267 [2024-12-05 13:57:02.661042] C1.Core1: Seqential cutoff init 00:25:31.267 [2024-12-05 13:57:02.693003] C1.Core1: Successfully added 00:25:31.267 C2 00:25:31.267 13:57:02 ocf.ocf_multicore -- management/multicore.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C2 00:25:31.267 13:57:02 ocf.ocf_multicore -- management/multicore.sh@54 -- # jq -e '.[0] | .started' 00:25:31.526 true 00:25:31.526 13:57:02 ocf.ocf_multicore -- management/multicore.sh@59 -- # stop_spdk 00:25:31.526 13:57:02 ocf.ocf_multicore -- management/multicore.sh@20 -- # killprocess 3968516 00:25:31.526 13:57:02 ocf.ocf_multicore -- common/autotest_common.sh@954 -- # '[' -z 3968516 ']' 00:25:31.526 13:57:02 ocf.ocf_multicore -- common/autotest_common.sh@958 -- # kill -0 3968516 00:25:31.526 13:57:02 ocf.ocf_multicore -- common/autotest_common.sh@959 -- # uname 00:25:31.526 13:57:02 ocf.ocf_multicore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.526 13:57:02 ocf.ocf_multicore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3968516 00:25:31.785 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:31.785 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:31.785 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3968516' 00:25:31.785 killing process with pid 3968516 00:25:31.785 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@973 -- # kill 3968516 00:25:31.785 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@978 -- # wait 3968516 00:25:31.785 [2024-12-05 13:57:03.196301] C1: Flushing cache 00:25:31.785 [2024-12-05 13:57:03.196349] C1: Flushing cache completed 00:25:31.785 [2024-12-05 13:57:03.196404] C1: Stopping cache 00:25:31.785 [2024-12-05 13:57:03.303673] C1: Done saving cache state! 
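Removing one OCF bdev that shares a cache with another does not stop the cache instance, which is what the stop_vbdev notice above records. A short sketch of that check, under the same assumptions as the previous block:

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

    # Deleting C2 removes core Core1 from the running cache, but the cache
    # instance 'Cache' keeps running because C1 still references it.
    $rpc bdev_ocf_delete C2
    $rpc bdev_ocf_get_bdevs C1 | jq -e '.[0] | .started'

    # Re-creating C2 reconnects to the existing cache instance and re-inserts Core1.
    $rpc bdev_ocf_create C2 wt Cache Core1
    $rpc bdev_ocf_get_bdevs C2 | jq -e '.[0] | .started'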
00:25:32.045 [2024-12-05 13:57:03.318831] Cache C1 successfully stopped 00:25:32.304 13:57:03 ocf.ocf_multicore -- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT 00:25:32.304 13:57:03 ocf.ocf_multicore -- management/multicore.sh@62 -- # start_spdk 00:25:32.304 13:57:03 ocf.ocf_multicore -- management/multicore.sh@15 -- # spdk_pid=3969288 00:25:32.304 13:57:03 ocf.ocf_multicore -- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:32.304 13:57:03 ocf.ocf_multicore -- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:25:32.304 13:57:03 ocf.ocf_multicore -- management/multicore.sh@17 -- # waitforlisten 3969288 00:25:32.304 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@835 -- # '[' -z 3969288 ']' 00:25:32.304 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.304 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.304 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.304 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.304 13:57:03 ocf.ocf_multicore -- common/autotest_common.sh@10 -- # set +x 00:25:32.304 [2024-12-05 13:57:03.708288] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:32.304 [2024-12-05 13:57:03.708364] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3969288 ] 00:25:32.562 [2024-12-05 13:57:03.830086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.562 [2024-12-05 13:57:03.886436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.821 [2024-12-05 13:57:04.094370] 'OCF_Core' volume operations registered 00:25:32.821 [2024-12-05 13:57:04.094405] 'OCF_Cache' volume operations registered 00:25:32.821 [2024-12-05 13:57:04.098846] 'OCF Composite' volume operations registered 00:25:32.821 [2024-12-05 13:57:04.103318] 'SPDK_block_device' volume operations registered 00:25:32.821 13:57:04 ocf.ocf_multicore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.821 13:57:04 ocf.ocf_multicore -- common/autotest_common.sh@868 -- # return 0 00:25:32.821 13:57:04 ocf.ocf_multicore -- management/multicore.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache 00:25:33.079 Cache 00:25:33.079 13:57:04 ocf.ocf_multicore -- management/multicore.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc 00:25:33.338 Malloc 00:25:33.338 13:57:04 ocf.ocf_multicore -- management/multicore.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core 00:25:33.596 Core 00:25:33.596 13:57:05 ocf.ocf_multicore -- management/multicore.sh@68 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Malloc 00:25:33.855 [2024-12-05 13:57:05.367494] Inserting cache C1 00:25:33.855 [2024-12-05 13:57:05.367926] C1: Metadata initialized 00:25:33.855 [2024-12-05 13:57:05.368367] C1: Successfully 
added 00:25:33.855 [2024-12-05 13:57:05.368381] C1: Cache mode : wt 00:25:34.113 [2024-12-05 13:57:05.378303] C1: Super block config offset : 0 kiB 00:25:34.113 [2024-12-05 13:57:05.378331] C1: Super block config size : 2200 B 00:25:34.113 [2024-12-05 13:57:05.378338] C1: Super block runtime offset : 128 kiB 00:25:34.113 [2024-12-05 13:57:05.378345] C1: Super block runtime size : 4 B 00:25:34.113 [2024-12-05 13:57:05.378351] C1: Reserved offset : 256 kiB 00:25:34.113 [2024-12-05 13:57:05.378358] C1: Reserved size : 128 kiB 00:25:34.113 [2024-12-05 13:57:05.378364] C1: Part config offset : 384 kiB 00:25:34.113 [2024-12-05 13:57:05.378371] C1: Part config size : 48 kiB 00:25:34.113 [2024-12-05 13:57:05.378377] C1: Part runtime offset : 640 kiB 00:25:34.113 [2024-12-05 13:57:05.378384] C1: Part runtime size : 72 kiB 00:25:34.113 [2024-12-05 13:57:05.378390] C1: Core config offset : 768 kiB 00:25:34.113 [2024-12-05 13:57:05.378396] C1: Core config size : 512 kiB 00:25:34.113 [2024-12-05 13:57:05.378403] C1: Core runtime offset : 1792 kiB 00:25:34.113 [2024-12-05 13:57:05.378410] C1: Core runtime size : 1172 kiB 00:25:34.113 [2024-12-05 13:57:05.378416] C1: Core UUID offset : 3072 kiB 00:25:34.113 [2024-12-05 13:57:05.378423] C1: Core UUID size : 16384 kiB 00:25:34.113 [2024-12-05 13:57:05.378430] C1: Cleaning offset : 35840 kiB 00:25:34.113 [2024-12-05 13:57:05.378436] C1: Cleaning size : 196 kiB 00:25:34.113 [2024-12-05 13:57:05.378443] C1: LRU list offset : 36096 kiB 00:25:34.113 [2024-12-05 13:57:05.378449] C1: LRU list size : 148 kiB 00:25:34.113 [2024-12-05 13:57:05.378456] C1: Collision offset : 36352 kiB 00:25:34.113 [2024-12-05 13:57:05.378462] C1: Collision size : 196 kiB 00:25:34.113 [2024-12-05 13:57:05.378469] C1: List info offset : 36608 kiB 00:25:34.113 [2024-12-05 13:57:05.378475] C1: List info size : 148 kiB 00:25:34.113 [2024-12-05 13:57:05.378482] C1: Hash offset : 36864 kiB 00:25:34.113 [2024-12-05 13:57:05.378488] C1: Hash size : 20 kiB 00:25:34.113 [2024-12-05 13:57:05.378496] C1: Cache line size: 4 kiB 00:25:34.113 [2024-12-05 13:57:05.378502] C1: Metadata size on device: 36992 kiB 00:25:34.113 [2024-12-05 13:57:05.388109] C1: Policy 'always' initialized successfully 00:25:34.113 [2024-12-05 13:57:05.501345] C1: Done saving cache state! 
00:25:34.113 [2024-12-05 13:57:05.532243] C1: Cache attached 00:25:34.113 [2024-12-05 13:57:05.532338] C1: Successfully attached 00:25:34.113 [2024-12-05 13:57:05.532612] C1: Inserting core Malloc 00:25:34.113 [2024-12-05 13:57:05.532645] C1.Malloc: Seqential cutoff init 00:25:34.113 [2024-12-05 13:57:05.563266] C1.Malloc: Successfully added 00:25:34.113 C1 00:25:34.113 13:57:05 ocf.ocf_multicore -- management/multicore.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core 00:25:34.371 [2024-12-05 13:57:05.834162] vbdev_ocf.c:1086:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache 00:25:34.371 [2024-12-05 13:57:05.834427] C1: Inserting core Core 00:25:34.371 [2024-12-05 13:57:05.834452] C1.Core: Seqential cutoff init 00:25:34.371 [2024-12-05 13:57:05.867791] C1.Core: Successfully added 00:25:34.371 C2 00:25:34.629 13:57:05 ocf.ocf_multicore -- management/multicore.sh@71 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs Cache 00:25:34.629 13:57:05 ocf.ocf_multicore -- management/multicore.sh@72 -- # jq 'length == 2' 00:25:34.887 true 00:25:34.887 13:57:06 ocf.ocf_multicore -- management/multicore.sh@74 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Cache 00:25:35.146 [2024-12-05 13:57:06.411110] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Cache' was removed 00:25:35.146 [2024-12-05 13:57:06.411155] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Cache' was removed 00:25:35.146 [2024-12-05 13:57:06.411422] C1: Flushing cache 00:25:35.146 [2024-12-05 13:57:06.411441] C1: Flushing cache completed 00:25:35.146 [2024-12-05 13:57:06.411728] C1: Flushing cache 00:25:35.146 [2024-12-05 13:57:06.411739] C1: Flushing cache completed 00:25:35.146 [2024-12-05 13:57:06.411830] C1: Stopping cache 00:25:35.146 [2024-12-05 13:57:06.518964] C1: Done saving cache state! 00:25:35.146 [2024-12-05 13:57:06.533645] Cache C1 successfully stopped 00:25:35.146 13:57:06 ocf.ocf_multicore -- management/multicore.sh@76 -- # jq -e '. 
== []' 00:25:35.146 13:57:06 ocf.ocf_multicore -- management/multicore.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:35.404 true 00:25:35.404 13:57:06 ocf.ocf_multicore -- management/multicore.sh@81 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Malloc NonExisting 00:25:35.662 [2024-12-05 13:57:07.021968] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for core device 'NonExisting' to connect 00:25:35.662 C1 00:25:35.662 13:57:07 ocf.ocf_multicore -- management/multicore.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Malloc NonExisting 00:25:35.920 [2024-12-05 13:57:07.198440] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for core device 'NonExisting' to connect 00:25:35.920 C2 00:25:35.920 13:57:07 ocf.ocf_multicore -- management/multicore.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C3 wt Malloc Core 00:25:36.179 [2024-12-05 13:57:07.463200] Inserting cache C3 00:25:36.179 [2024-12-05 13:57:07.463612] C3: Metadata initialized 00:25:36.179 [2024-12-05 13:57:07.464052] C3: Successfully added 00:25:36.179 [2024-12-05 13:57:07.464060] C3: Cache mode : wt 00:25:36.179 [2024-12-05 13:57:07.473973] C3: Super block config offset : 0 kiB 00:25:36.179 [2024-12-05 13:57:07.473995] C3: Super block config size : 2200 B 00:25:36.179 [2024-12-05 13:57:07.474002] C3: Super block runtime offset : 128 kiB 00:25:36.179 [2024-12-05 13:57:07.474009] C3: Super block runtime size : 4 B 00:25:36.179 [2024-12-05 13:57:07.474016] C3: Reserved offset : 256 kiB 00:25:36.179 [2024-12-05 13:57:07.474022] C3: Reserved size : 128 kiB 00:25:36.179 [2024-12-05 13:57:07.474029] C3: Part config offset : 384 kiB 00:25:36.179 [2024-12-05 13:57:07.474035] C3: Part config size : 48 kiB 00:25:36.179 [2024-12-05 13:57:07.474042] C3: Part runtime offset : 640 kiB 00:25:36.179 [2024-12-05 13:57:07.474048] C3: Part runtime size : 72 kiB 00:25:36.179 [2024-12-05 13:57:07.474055] C3: Core config offset : 768 kiB 00:25:36.179 [2024-12-05 13:57:07.474061] C3: Core config size : 512 kiB 00:25:36.179 [2024-12-05 13:57:07.474068] C3: Core runtime offset : 1792 kiB 00:25:36.179 [2024-12-05 13:57:07.474074] C3: Core runtime size : 1172 kiB 00:25:36.179 [2024-12-05 13:57:07.474081] C3: Core UUID offset : 3072 kiB 00:25:36.179 [2024-12-05 13:57:07.474087] C3: Core UUID size : 16384 kiB 00:25:36.179 [2024-12-05 13:57:07.474093] C3: Cleaning offset : 35840 kiB 00:25:36.179 [2024-12-05 13:57:07.474100] C3: Cleaning size : 196 kiB 00:25:36.179 [2024-12-05 13:57:07.474106] C3: LRU list offset : 36096 kiB 00:25:36.179 [2024-12-05 13:57:07.474112] C3: LRU list size : 148 kiB 00:25:36.179 [2024-12-05 13:57:07.474119] C3: Collision offset : 36352 kiB 00:25:36.179 [2024-12-05 13:57:07.474125] C3: Collision size : 196 kiB 00:25:36.179 [2024-12-05 13:57:07.474132] C3: List info offset : 36608 kiB 00:25:36.179 [2024-12-05 13:57:07.474138] C3: List info size : 148 kiB 00:25:36.179 [2024-12-05 13:57:07.474144] C3: Hash offset : 36864 kiB 00:25:36.179 [2024-12-05 13:57:07.474151] C3: Hash size : 20 kiB 00:25:36.179 [2024-12-05 13:57:07.474158] C3: Cache line size: 4 kiB 00:25:36.179 [2024-12-05 13:57:07.474165] C3: Metadata size on device: 36992 kiB 00:25:36.179 [2024-12-05 13:57:07.483751] C3: Policy 'always' initialized successfully 00:25:36.179 [2024-12-05 13:57:07.597527] C3: Done saving cache state! 
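Hot-removing the backing cache device is the inverse case: deleting the malloc bdev named 'Cache' deinitializes every OCF bdev built on it, after which the OCF bdev list is empty. A minimal sketch, assuming the C1/C2 pair from the earlier steps is still attached:

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

    # Deleting the cache device fires hotremove_cb for each dependent OCF bdev.
    $rpc bdev_malloc_delete Cache
    $rpc bdev_ocf_get_bdevs | jq -e '. == []'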
00:25:36.179 [2024-12-05 13:57:07.629311] C3: Cache attached 00:25:36.179 [2024-12-05 13:57:07.629407] C3: Successfully attached 00:25:36.179 [2024-12-05 13:57:07.629703] C3: Inserting core Core 00:25:36.179 [2024-12-05 13:57:07.629727] C3.Core: Seqential cutoff init 00:25:36.179 [2024-12-05 13:57:07.660670] C3.Core: Successfully added 00:25:36.179 C3 00:25:36.179 13:57:07 ocf.ocf_multicore -- management/multicore.sh@85 -- # stop_spdk 00:25:36.179 13:57:07 ocf.ocf_multicore -- management/multicore.sh@20 -- # killprocess 3969288 00:25:36.179 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@954 -- # '[' -z 3969288 ']' 00:25:36.179 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@958 -- # kill -0 3969288 00:25:36.179 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@959 -- # uname 00:25:36.179 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.179 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3969288 00:25:36.439 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:36.439 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:36.439 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3969288' 00:25:36.439 killing process with pid 3969288 00:25:36.439 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@973 -- # kill 3969288 00:25:36.439 13:57:07 ocf.ocf_multicore -- common/autotest_common.sh@978 -- # wait 3969288 00:25:36.439 [2024-12-05 13:57:07.901626] C3: Flushing cache 00:25:36.439 [2024-12-05 13:57:07.901685] C3: Flushing cache completed 00:25:36.439 [2024-12-05 13:57:07.901728] C3: Stopping cache 00:25:36.698 [2024-12-05 13:57:08.009645] C3: Done saving cache state! 00:25:36.698 [2024-12-05 13:57:08.026365] Cache C3 successfully stopped 00:25:36.698 [2024-12-05 13:57:08.028506] bdev.c:2555:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc'! 
00:25:36.698 [2024-12-05 13:57:08.028565] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Malloc' was removed 00:25:36.698 [2024-12-05 13:57:08.028582] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Malloc' was removed 00:25:36.956 13:57:08 ocf.ocf_multicore -- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT 00:25:36.956 00:25:36.956 real 0m9.655s 00:25:36.956 user 0m14.341s 00:25:36.956 sys 0m2.096s 00:25:36.956 13:57:08 ocf.ocf_multicore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.956 13:57:08 ocf.ocf_multicore -- common/autotest_common.sh@10 -- # set +x 00:25:36.956 ************************************ 00:25:36.956 END TEST ocf_multicore 00:25:36.957 ************************************ 00:25:36.957 13:57:08 ocf -- ocf/ocf.sh@17 -- # run_test ocf_remove /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh 00:25:36.957 13:57:08 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:36.957 13:57:08 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.957 13:57:08 ocf -- common/autotest_common.sh@10 -- # set +x 00:25:37.216 ************************************ 00:25:37.216 START TEST ocf_remove 00:25:37.216 ************************************ 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1711 -- # lcov --version 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@344 -- # case "$op" in 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@345 -- # : 1 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@365 -- # decimal 1 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@353 -- # local d=1 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@355 -- # echo 1 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@366 -- # decimal 2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@353 -- # local d=2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@355 -- # echo 2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.216 13:57:08 ocf.ocf_remove -- scripts/common.sh@368 -- # return 0 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.216 --rc genhtml_branch_coverage=1 00:25:37.216 --rc genhtml_function_coverage=1 00:25:37.216 --rc genhtml_legend=1 00:25:37.216 --rc geninfo_all_blocks=1 00:25:37.216 --rc geninfo_unexecuted_blocks=1 00:25:37.216 00:25:37.216 ' 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.216 --rc genhtml_branch_coverage=1 00:25:37.216 --rc genhtml_function_coverage=1 00:25:37.216 --rc genhtml_legend=1 00:25:37.216 --rc geninfo_all_blocks=1 00:25:37.216 --rc geninfo_unexecuted_blocks=1 00:25:37.216 00:25:37.216 ' 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.216 --rc genhtml_branch_coverage=1 00:25:37.216 --rc genhtml_function_coverage=1 00:25:37.216 --rc genhtml_legend=1 00:25:37.216 --rc geninfo_all_blocks=1 00:25:37.216 --rc geninfo_unexecuted_blocks=1 00:25:37.216 00:25:37.216 ' 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.216 --rc genhtml_branch_coverage=1 00:25:37.216 --rc genhtml_function_coverage=1 00:25:37.216 --rc genhtml_legend=1 00:25:37.216 --rc geninfo_all_blocks=1 00:25:37.216 --rc geninfo_unexecuted_blocks=1 00:25:37.216 00:25:37.216 ' 00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@12 -- # rm -f 00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@13 -- # truncate -s 128M aio0 00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@14 -- # truncate -s 128M aio1 00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@16 -- # jq . 
00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@48 -- # spdk_pid=3970432 00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@50 -- # waitforlisten 3970432 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@835 -- # '[' -z 3970432 ']' 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.216 13:57:08 ocf.ocf_remove -- management/remove.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.216 13:57:08 ocf.ocf_remove -- common/autotest_common.sh@10 -- # set +x 00:25:37.216 [2024-12-05 13:57:08.735354] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:37.216 [2024-12-05 13:57:08.735429] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3970432 ] 00:25:37.475 [2024-12-05 13:57:08.857512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.475 [2024-12-05 13:57:08.916684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.735 [2024-12-05 13:57:09.125082] 'OCF_Core' volume operations registered 00:25:37.735 [2024-12-05 13:57:09.125117] 'OCF_Cache' volume operations registered 00:25:37.735 [2024-12-05 13:57:09.129530] 'OCF Composite' volume operations registered 00:25:37.735 [2024-12-05 13:57:09.133980] 'SPDK_block_device' volume operations registered 00:25:37.995 13:57:09 ocf.ocf_remove -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.995 13:57:09 ocf.ocf_remove -- common/autotest_common.sh@868 -- # return 0 00:25:37.995 13:57:09 ocf.ocf_remove -- management/remove.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create ocfWT wt aio0 aio1 00:25:38.254 [2024-12-05 13:57:09.602204] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'ocfWT' is waiting for cache device 'aio0' to connect 00:25:38.254 ocfWT 00:25:38.254 13:57:09 ocf.ocf_remove -- management/remove.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:38.254 13:57:09 ocf.ocf_remove -- management/remove.sh@58 -- # jq -r '.[] .name' 00:25:38.254 13:57:09 ocf.ocf_remove -- management/remove.sh@58 -- # grep -qw ocfWT 00:25:38.515 13:57:09 ocf.ocf_remove -- management/remove.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete ocfWT 00:25:38.515 13:57:10 ocf.ocf_remove -- management/remove.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:38.515 13:57:10 ocf.ocf_remove -- management/remove.sh@66 -- # jq -r '.[] | select(.name == "ocfWT") | .name' 00:25:38.773 13:57:10 ocf.ocf_remove -- management/remove.sh@66 -- # [[ -z '' ]] 00:25:38.773 13:57:10 ocf.ocf_remove -- management/remove.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:25:38.773 13:57:10 
ocf.ocf_remove -- management/remove.sh@70 -- # killprocess 3970432 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@954 -- # '[' -z 3970432 ']' 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@958 -- # kill -0 3970432 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@959 -- # uname 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3970432 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3970432' 00:25:38.773 killing process with pid 3970432 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@973 -- # kill 3970432 00:25:38.773 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@978 -- # wait 3970432 00:25:39.341 13:57:10 ocf.ocf_remove -- management/remove.sh@74 -- # spdk_pid=3970629 00:25:39.341 13:57:10 ocf.ocf_remove -- management/remove.sh@76 -- # trap 'killprocess $spdk_pid; rm -f aio* $curdir/config ocf_bdevs ocf_bdevs_verify; exit 1' SIGINT SIGTERM EXIT 00:25:39.341 13:57:10 ocf.ocf_remove -- management/remove.sh@73 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config 00:25:39.341 13:57:10 ocf.ocf_remove -- management/remove.sh@78 -- # waitforlisten 3970629 00:25:39.341 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@835 -- # '[' -z 3970629 ']' 00:25:39.341 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.341 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.341 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.341 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.341 13:57:10 ocf.ocf_remove -- common/autotest_common.sh@10 -- # set +x 00:25:39.341 [2024-12-05 13:57:10.786513] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
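The remove.sh run above drives the same lifecycle against file-backed AIO devices loaded from a JSON config: it creates an ocfWT bdev on aio0/aio1, confirms it is listed, then deletes it. A sketch of those RPCs, assuming the 128M aio0/aio1 files exist and the target was started with --json as shown in the log:

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

    # 'ocfWT' initially reports waiting for its cache device 'aio0' (see the notice above).
    $rpc bdev_ocf_create ocfWT wt aio0 aio1
    $rpc bdev_ocf_get_bdevs | jq -r '.[] .name' | grep -qw ocfWT

    # Delete it and confirm it no longer appears in the OCF bdev list.
    $rpc bdev_ocf_delete ocfWT
    [[ -z "$($rpc bdev_ocf_get_bdevs | jq -r '.[] | select(.name == "ocfWT") | .name')" ]]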
00:25:39.341 [2024-12-05 13:57:10.786571] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3970629 ] 00:25:39.599 [2024-12-05 13:57:10.893064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.599 [2024-12-05 13:57:10.949837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.857 [2024-12-05 13:57:11.135231] 'OCF_Core' volume operations registered 00:25:39.857 [2024-12-05 13:57:11.135263] 'OCF_Cache' volume operations registered 00:25:39.857 [2024-12-05 13:57:11.139353] 'OCF Composite' volume operations registered 00:25:39.857 [2024-12-05 13:57:11.143383] 'SPDK_block_device' volume operations registered 00:25:39.857 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.857 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@868 -- # return 0 00:25:39.857 13:57:11 ocf.ocf_remove -- management/remove.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:39.857 13:57:11 ocf.ocf_remove -- management/remove.sh@82 -- # jq length 00:25:40.424 13:57:11 ocf.ocf_remove -- management/remove.sh@82 -- # (( 0 == 0 )) 00:25:40.424 13:57:11 ocf.ocf_remove -- management/remove.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:40.424 13:57:11 ocf.ocf_remove -- management/remove.sh@86 -- # killprocess 3970629 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@954 -- # '[' -z 3970629 ']' 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@958 -- # kill -0 3970629 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@959 -- # uname 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3970629 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3970629' 00:25:40.424 killing process with pid 3970629 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@973 -- # kill 3970629 00:25:40.424 13:57:11 ocf.ocf_remove -- common/autotest_common.sh@978 -- # wait 3970629 00:25:40.992 13:57:12 ocf.ocf_remove -- management/remove.sh@87 -- # rm -f aio0 aio1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config ocf_bdevs ocf_bdevs_verify 00:25:40.992 00:25:40.992 real 0m3.736s 00:25:40.992 user 0m4.198s 00:25:40.992 sys 0m1.221s 00:25:40.992 13:57:12 ocf.ocf_remove -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:40.992 13:57:12 ocf.ocf_remove -- common/autotest_common.sh@10 -- # set +x 00:25:40.992 ************************************ 00:25:40.992 END TEST ocf_remove 00:25:40.992 ************************************ 00:25:40.992 13:57:12 ocf -- ocf/ocf.sh@18 -- # run_test ocf_configuration_change /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh 00:25:40.992 13:57:12 ocf -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:40.992 13:57:12 ocf -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.992 13:57:12 ocf -- common/autotest_common.sh@10 -- # set +x 00:25:40.992 
************************************ 00:25:40.992 START TEST ocf_configuration_change 00:25:40.992 ************************************ 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1711 -- # lcov --version 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@344 -- # case "$op" in 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@345 -- # : 1 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@365 -- # decimal 1 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@353 -- # local d=1 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@355 -- # echo 1 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@366 -- # decimal 2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@353 -- # local d=2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@355 -- # echo 2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- scripts/common.sh@368 -- # return 0 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.992 --rc genhtml_branch_coverage=1 00:25:40.992 --rc genhtml_function_coverage=1 00:25:40.992 --rc genhtml_legend=1 00:25:40.992 --rc geninfo_all_blocks=1 00:25:40.992 --rc geninfo_unexecuted_blocks=1 00:25:40.992 00:25:40.992 ' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.992 --rc genhtml_branch_coverage=1 00:25:40.992 --rc genhtml_function_coverage=1 00:25:40.992 --rc genhtml_legend=1 00:25:40.992 --rc geninfo_all_blocks=1 00:25:40.992 --rc geninfo_unexecuted_blocks=1 00:25:40.992 00:25:40.992 ' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.992 --rc genhtml_branch_coverage=1 00:25:40.992 --rc genhtml_function_coverage=1 00:25:40.992 --rc genhtml_legend=1 00:25:40.992 --rc geninfo_all_blocks=1 00:25:40.992 --rc geninfo_unexecuted_blocks=1 00:25:40.992 00:25:40.992 ' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.992 --rc genhtml_branch_coverage=1 00:25:40.992 --rc genhtml_function_coverage=1 00:25:40.992 --rc genhtml_legend=1 00:25:40.992 --rc geninfo_all_blocks=1 00:25:40.992 --rc geninfo_unexecuted_blocks=1 00:25:40.992 00:25:40.992 ' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- management/configuration-change.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- management/configuration-change.sh@11 -- # cache_line_sizes=(4 8 16 32 64) 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- management/configuration-change.sh@12 -- # cache_modes=(wt wb pt wa wi wo) 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- 
management/configuration-change.sh@15 -- # spdk_pid=3970884 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- management/configuration-change.sh@17 -- # waitforlisten 3970884 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@835 -- # '[' -z 3970884 ']' 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- management/configuration-change.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.992 13:57:12 ocf.ocf_configuration_change -- common/autotest_common.sh@10 -- # set +x 00:25:41.251 [2024-12-05 13:57:12.529522] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 00:25:41.251 [2024-12-05 13:57:12.529579] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3970884 ] 00:25:41.251 [2024-12-05 13:57:12.638233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.251 [2024-12-05 13:57:12.694924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.509 [2024-12-05 13:57:12.880265] 'OCF_Core' volume operations registered 00:25:41.509 [2024-12-05 13:57:12.880300] 'OCF_Cache' volume operations registered 00:25:41.509 [2024-12-05 13:57:12.884288] 'OCF Composite' volume operations registered 00:25:41.509 [2024-12-05 13:57:12.888380] 'SPDK_block_device' volume operations registered 00:25:41.767 13:57:13 ocf.ocf_configuration_change -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.767 13:57:13 ocf.ocf_configuration_change -- common/autotest_common.sh@868 -- # return 0 00:25:41.767 13:57:13 ocf.ocf_configuration_change -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:25:41.767 13:57:13 ocf.ocf_configuration_change -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:25:42.025 Malloc0 00:25:42.025 13:57:13 ocf.ocf_configuration_change -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:25:42.282 Malloc1 00:25:42.282 13:57:13 ocf.ocf_configuration_change -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 4 00:25:42.282 [2024-12-05 13:57:13.799198] Inserting cache Cache0 00:25:42.282 [2024-12-05 13:57:13.799681] Cache0: Metadata initialized 00:25:42.282 [2024-12-05 13:57:13.800125] Cache0: Successfully added 00:25:42.282 [2024-12-05 13:57:13.800140] Cache0: Cache mode : wt 00:25:42.541 [2024-12-05 13:57:13.810996] Cache0: Super block config offset : 0 kiB 00:25:42.541 [2024-12-05 13:57:13.811019] Cache0: Super block 
config size : 2200 B 00:25:42.541 [2024-12-05 13:57:13.811026] Cache0: Super block runtime offset : 128 kiB 00:25:42.541 [2024-12-05 13:57:13.811032] Cache0: Super block runtime size : 4 B 00:25:42.541 [2024-12-05 13:57:13.811039] Cache0: Reserved offset : 256 kiB 00:25:42.541 [2024-12-05 13:57:13.811046] Cache0: Reserved size : 128 kiB 00:25:42.541 [2024-12-05 13:57:13.811052] Cache0: Part config offset : 384 kiB 00:25:42.541 [2024-12-05 13:57:13.811058] Cache0: Part config size : 48 kiB 00:25:42.541 [2024-12-05 13:57:13.811065] Cache0: Part runtime offset : 640 kiB 00:25:42.541 [2024-12-05 13:57:13.811071] Cache0: Part runtime size : 72 kiB 00:25:42.541 [2024-12-05 13:57:13.811078] Cache0: Core config offset : 768 kiB 00:25:42.541 [2024-12-05 13:57:13.811084] Cache0: Core config size : 512 kiB 00:25:42.541 [2024-12-05 13:57:13.811091] Cache0: Core runtime offset : 1792 kiB 00:25:42.541 [2024-12-05 13:57:13.811097] Cache0: Core runtime size : 1172 kiB 00:25:42.541 [2024-12-05 13:57:13.811103] Cache0: Core UUID offset : 3072 kiB 00:25:42.541 [2024-12-05 13:57:13.811110] Cache0: Core UUID size : 16384 kiB 00:25:42.541 [2024-12-05 13:57:13.811116] Cache0: Cleaning offset : 35840 kiB 00:25:42.541 [2024-12-05 13:57:13.811123] Cache0: Cleaning size : 196 kiB 00:25:42.541 [2024-12-05 13:57:13.811129] Cache0: LRU list offset : 36096 kiB 00:25:42.541 [2024-12-05 13:57:13.811135] Cache0: LRU list size : 148 kiB 00:25:42.541 [2024-12-05 13:57:13.811142] Cache0: Collision offset : 36352 kiB 00:25:42.541 [2024-12-05 13:57:13.811148] Cache0: Collision size : 196 kiB 00:25:42.541 [2024-12-05 13:57:13.811154] Cache0: List info offset : 36608 kiB 00:25:42.541 [2024-12-05 13:57:13.811161] Cache0: List info size : 148 kiB 00:25:42.541 [2024-12-05 13:57:13.811167] Cache0: Hash offset : 36864 kiB 00:25:42.541 [2024-12-05 13:57:13.811174] Cache0: Hash size : 20 kiB 00:25:42.541 [2024-12-05 13:57:13.811180] Cache0: Cache line size: 4 kiB 00:25:42.541 [2024-12-05 13:57:13.811187] Cache0: Metadata size on device: 36992 kiB 00:25:42.541 [2024-12-05 13:57:13.821658] Cache0: Policy 'always' initialized successfully 00:25:42.541 [2024-12-05 13:57:13.936304] Cache0: Done saving cache state! 
00:25:42.541 [2024-12-05 13:57:13.967915] Cache0: Cache attached 00:25:42.541 [2024-12-05 13:57:13.968010] Cache0: Successfully attached 00:25:42.541 [2024-12-05 13:57:13.968291] Cache0: Inserting core Malloc1 00:25:42.541 [2024-12-05 13:57:13.968313] Cache0.Malloc1: Seqential cutoff init 00:25:42.541 [2024-12-05 13:57:13.999789] Cache0.Malloc1: Successfully added 00:25:42.541 Cache0 00:25:42.541 13:57:14 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:42.541 13:57:14 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:25:42.799 true 00:25:42.799 13:57:14 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:42.799 13:57:14 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 4' 00:25:43.058 true 00:25:43.058 13:57:14 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:43.058 13:57:14 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 4' 00:25:43.317 true 00:25:43.317 13:57:14 ocf.ocf_configuration_change -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:25:43.576 [2024-12-05 13:57:14.892465] Cache0: Flushing cache 00:25:43.576 [2024-12-05 13:57:14.892499] Cache0: Flushing cache completed 00:25:43.576 [2024-12-05 13:57:14.893502] Cache0.Malloc1: Removing core 00:25:43.576 [2024-12-05 13:57:14.925456] Cache0: Core Malloc1 successfully removed 00:25:43.576 [2024-12-05 13:57:14.925511] Cache0: Stopping cache 00:25:43.576 [2024-12-05 13:57:15.031646] Cache0: Done saving cache state! 
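configuration-change.sh builds the same Cache0 stack once per cache line size and checks the size from three angles: the OCF bdev state, the bdev's driver_specific data, and the configuration produced by save_subsystem_config. A sketch of one iteration (4 kiB), using only the RPCs visible above and the same rpc shorthand:

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 101 512 -b Malloc0
    $rpc bdev_malloc_create 101 512 -b Malloc1

    # Explicit cache line size for this iteration.
    $rpc bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 4

    # Started and attached?
    $rpc bdev_ocf_get_bdevs | jq -e '.[0] | .started and .cache.attached and .core.attached'
    # Reported by the bdev layer?
    $rpc bdev_get_bdevs -b Cache0 | jq -e '.[0] | .driver_specific.cache_line_size == 4'
    # Persisted in the generated config?
    $rpc save_subsystem_config -n bdev | jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 4'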
00:25:43.576 [2024-12-05 13:57:15.046739] Cache Cache0 successfully stopped 00:25:43.576 13:57:15 ocf.ocf_configuration_change -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:43.834 13:57:15 ocf.ocf_configuration_change -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:44.093 13:57:15 ocf.ocf_configuration_change -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:25:44.093 13:57:15 ocf.ocf_configuration_change -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:25:44.351 Malloc0 00:25:44.351 13:57:15 ocf.ocf_configuration_change -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:25:44.610 Malloc1 00:25:44.869 13:57:16 ocf.ocf_configuration_change -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 8 00:25:44.869 [2024-12-05 13:57:16.378214] Inserting cache Cache0 00:25:44.869 [2024-12-05 13:57:16.378637] Cache0: Metadata initialized 00:25:44.869 [2024-12-05 13:57:16.379068] Cache0: Successfully added 00:25:44.869 [2024-12-05 13:57:16.379076] Cache0: Cache mode : wt 00:25:44.869 [2024-12-05 13:57:16.388937] Cache0: Super block config offset : 0 kiB 00:25:44.869 [2024-12-05 13:57:16.388958] Cache0: Super block config size : 2200 B 00:25:44.869 [2024-12-05 13:57:16.388965] Cache0: Super block runtime offset : 128 kiB 00:25:44.869 [2024-12-05 13:57:16.388971] Cache0: Super block runtime size : 4 B 00:25:44.869 [2024-12-05 13:57:16.388978] Cache0: Reserved offset : 256 kiB 00:25:44.869 [2024-12-05 13:57:16.388985] Cache0: Reserved size : 128 kiB 00:25:44.869 [2024-12-05 13:57:16.388991] Cache0: Part config offset : 384 kiB 00:25:44.869 [2024-12-05 13:57:16.388998] Cache0: Part config size : 48 kiB 00:25:44.869 [2024-12-05 13:57:16.389004] Cache0: Part runtime offset : 640 kiB 00:25:44.869 [2024-12-05 13:57:16.389011] Cache0: Part runtime size : 72 kiB 00:25:44.869 [2024-12-05 13:57:16.389017] Cache0: Core config offset : 768 kiB 00:25:44.869 [2024-12-05 13:57:16.389023] Cache0: Core config size : 512 kiB 00:25:44.869 [2024-12-05 13:57:16.389030] Cache0: Core runtime offset : 1792 kiB 00:25:44.869 [2024-12-05 13:57:16.389036] Cache0: Core runtime size : 1172 kiB 00:25:44.869 [2024-12-05 13:57:16.389043] Cache0: Core UUID offset : 3072 kiB 00:25:44.869 [2024-12-05 13:57:16.389049] Cache0: Core UUID size : 16384 kiB 00:25:44.869 [2024-12-05 13:57:16.389056] Cache0: Cleaning offset : 35840 kiB 00:25:44.869 [2024-12-05 13:57:16.389062] Cache0: Cleaning size : 100 kiB 00:25:44.869 [2024-12-05 13:57:16.389069] Cache0: LRU list offset : 35968 kiB 00:25:44.869 [2024-12-05 13:57:16.389075] Cache0: LRU list size : 76 kiB 00:25:44.869 [2024-12-05 13:57:16.389081] Cache0: Collision offset : 36096 kiB 00:25:44.869 [2024-12-05 13:57:16.389088] Cache0: Collision size : 116 kiB 00:25:44.869 [2024-12-05 13:57:16.389094] Cache0: List info offset : 36224 kiB 00:25:44.869 [2024-12-05 13:57:16.389101] Cache0: List info size : 76 kiB 00:25:44.869 [2024-12-05 13:57:16.389107] Cache0: Hash offset : 36352 kiB 00:25:44.869 [2024-12-05 13:57:16.389113] Cache0: Hash size : 12 kiB 00:25:44.869 
[2024-12-05 13:57:16.389120] Cache0: Cache line size: 8 kiB 00:25:44.869 [2024-12-05 13:57:16.389127] Cache0: Metadata size on device: 36480 kiB 00:25:45.127 [2024-12-05 13:57:16.398709] Cache0: Policy 'always' initialized successfully 00:25:45.127 [2024-12-05 13:57:16.496719] Cache0: Done saving cache state! 00:25:45.127 [2024-12-05 13:57:16.527358] Cache0: Cache attached 00:25:45.127 [2024-12-05 13:57:16.527454] Cache0: Successfully attached 00:25:45.127 [2024-12-05 13:57:16.527742] Cache0: Inserting core Malloc1 00:25:45.127 [2024-12-05 13:57:16.527764] Cache0.Malloc1: Seqential cutoff init 00:25:45.127 [2024-12-05 13:57:16.558688] Cache0.Malloc1: Successfully added 00:25:45.127 Cache0 00:25:45.127 13:57:16 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:45.127 13:57:16 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:25:45.386 true 00:25:45.386 13:57:16 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:45.386 13:57:16 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 8' 00:25:45.645 true 00:25:45.645 13:57:17 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:45.645 13:57:17 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 8' 00:25:45.903 true 00:25:45.903 13:57:17 ocf.ocf_configuration_change -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:25:46.162 [2024-12-05 13:57:17.463469] Cache0: Flushing cache 00:25:46.162 [2024-12-05 13:57:17.463504] Cache0: Flushing cache completed 00:25:46.162 [2024-12-05 13:57:17.464153] Cache0.Malloc1: Removing core 00:25:46.162 [2024-12-05 13:57:17.497032] Cache0: Core Malloc1 successfully removed 00:25:46.162 [2024-12-05 13:57:17.497094] Cache0: Stopping cache 00:25:46.162 [2024-12-05 13:57:17.591397] Cache0: Done saving cache state! 
00:25:46.162 [2024-12-05 13:57:17.607227] Cache Cache0 successfully stopped 00:25:46.162 13:57:17 ocf.ocf_configuration_change -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:46.422 13:57:17 ocf.ocf_configuration_change -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:46.681 13:57:18 ocf.ocf_configuration_change -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:25:46.681 13:57:18 ocf.ocf_configuration_change -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:25:46.940 Malloc0 00:25:46.940 13:57:18 ocf.ocf_configuration_change -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:25:47.200 Malloc1 00:25:47.200 13:57:18 ocf.ocf_configuration_change -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 16 00:25:47.460 [2024-12-05 13:57:18.949672] Inserting cache Cache0 00:25:47.460 [2024-12-05 13:57:18.950144] Cache0: Metadata initialized 00:25:47.460 [2024-12-05 13:57:18.950577] Cache0: Successfully added 00:25:47.460 [2024-12-05 13:57:18.950585] Cache0: Cache mode : wt 00:25:47.460 [2024-12-05 13:57:18.961292] Cache0: Super block config offset : 0 kiB 00:25:47.460 [2024-12-05 13:57:18.961316] Cache0: Super block config size : 2200 B 00:25:47.460 [2024-12-05 13:57:18.961323] Cache0: Super block runtime offset : 128 kiB 00:25:47.460 [2024-12-05 13:57:18.961330] Cache0: Super block runtime size : 4 B 00:25:47.460 [2024-12-05 13:57:18.961336] Cache0: Reserved offset : 256 kiB 00:25:47.460 [2024-12-05 13:57:18.961343] Cache0: Reserved size : 128 kiB 00:25:47.460 [2024-12-05 13:57:18.961349] Cache0: Part config offset : 384 kiB 00:25:47.460 [2024-12-05 13:57:18.961356] Cache0: Part config size : 48 kiB 00:25:47.460 [2024-12-05 13:57:18.961362] Cache0: Part runtime offset : 640 kiB 00:25:47.460 [2024-12-05 13:57:18.961369] Cache0: Part runtime size : 72 kiB 00:25:47.460 [2024-12-05 13:57:18.961375] Cache0: Core config offset : 768 kiB 00:25:47.460 [2024-12-05 13:57:18.961381] Cache0: Core config size : 512 kiB 00:25:47.460 [2024-12-05 13:57:18.961397] Cache0: Core runtime offset : 1792 kiB 00:25:47.460 [2024-12-05 13:57:18.961403] Cache0: Core runtime size : 1172 kiB 00:25:47.460 [2024-12-05 13:57:18.961410] Cache0: Core UUID offset : 3072 kiB 00:25:47.460 [2024-12-05 13:57:18.961416] Cache0: Core UUID size : 16384 kiB 00:25:47.460 [2024-12-05 13:57:18.961423] Cache0: Cleaning offset : 35840 kiB 00:25:47.460 [2024-12-05 13:57:18.961429] Cache0: Cleaning size : 52 kiB 00:25:47.460 [2024-12-05 13:57:18.961435] Cache0: LRU list offset : 35968 kiB 00:25:47.460 [2024-12-05 13:57:18.961442] Cache0: LRU list size : 40 kiB 00:25:47.460 [2024-12-05 13:57:18.961448] Cache0: Collision offset : 36096 kiB 00:25:47.460 [2024-12-05 13:57:18.961454] Cache0: Collision size : 76 kiB 00:25:47.460 [2024-12-05 13:57:18.961461] Cache0: List info offset : 36224 kiB 00:25:47.460 [2024-12-05 13:57:18.961467] Cache0: List info size : 40 kiB 00:25:47.460 [2024-12-05 13:57:18.961474] Cache0: Hash offset : 36352 kiB 00:25:47.460 [2024-12-05 13:57:18.961480] Cache0: Hash size : 8 kiB 00:25:47.460 
[2024-12-05 13:57:18.961487] Cache0: Cache line size: 16 kiB 00:25:47.460 [2024-12-05 13:57:18.961494] Cache0: Metadata size on device: 36480 kiB 00:25:47.460 [2024-12-05 13:57:18.971862] Cache0: Policy 'always' initialized successfully 00:25:47.720 [2024-12-05 13:57:19.063250] Cache0: Done saving cache state! 00:25:47.720 [2024-12-05 13:57:19.094667] Cache0: Cache attached 00:25:47.720 [2024-12-05 13:57:19.094763] Cache0: Successfully attached 00:25:47.720 [2024-12-05 13:57:19.095052] Cache0: Inserting core Malloc1 00:25:47.720 [2024-12-05 13:57:19.095073] Cache0.Malloc1: Seqential cutoff init 00:25:47.720 [2024-12-05 13:57:19.126270] Cache0.Malloc1: Successfully added 00:25:47.720 Cache0 00:25:47.720 13:57:19 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:47.720 13:57:19 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:25:47.980 true 00:25:47.980 13:57:19 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:47.980 13:57:19 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 16' 00:25:48.239 true 00:25:48.239 13:57:19 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 16' 00:25:48.239 13:57:19 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:48.499 true 00:25:48.499 13:57:19 ocf.ocf_configuration_change -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:25:48.758 [2024-12-05 13:57:20.035024] Cache0: Flushing cache 00:25:48.758 [2024-12-05 13:57:20.035059] Cache0: Flushing cache completed 00:25:48.758 [2024-12-05 13:57:20.035533] Cache0.Malloc1: Removing core 00:25:48.758 [2024-12-05 13:57:20.067856] Cache0: Core Malloc1 successfully removed 00:25:48.758 [2024-12-05 13:57:20.067915] Cache0: Stopping cache 00:25:48.758 [2024-12-05 13:57:20.155901] Cache0: Done saving cache state! 
00:25:48.758 [2024-12-05 13:57:20.170514] Cache Cache0 successfully stopped 00:25:48.758 13:57:20 ocf.ocf_configuration_change -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:49.017 13:57:20 ocf.ocf_configuration_change -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:49.275 13:57:20 ocf.ocf_configuration_change -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:25:49.275 13:57:20 ocf.ocf_configuration_change -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:25:49.534 Malloc0 00:25:49.534 13:57:20 ocf.ocf_configuration_change -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:25:49.534 Malloc1 00:25:49.534 13:57:21 ocf.ocf_configuration_change -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 32 00:25:49.792 [2024-12-05 13:57:21.273793] Inserting cache Cache0 00:25:49.792 [2024-12-05 13:57:21.274264] Cache0: Metadata initialized 00:25:49.792 [2024-12-05 13:57:21.274702] Cache0: Successfully added 00:25:49.792 [2024-12-05 13:57:21.274711] Cache0: Cache mode : wt 00:25:49.792 [2024-12-05 13:57:21.285417] Cache0: Super block config offset : 0 kiB 00:25:49.792 [2024-12-05 13:57:21.285441] Cache0: Super block config size : 2200 B 00:25:49.792 [2024-12-05 13:57:21.285448] Cache0: Super block runtime offset : 128 kiB 00:25:49.792 [2024-12-05 13:57:21.285455] Cache0: Super block runtime size : 4 B 00:25:49.792 [2024-12-05 13:57:21.285461] Cache0: Reserved offset : 256 kiB 00:25:49.792 [2024-12-05 13:57:21.285468] Cache0: Reserved size : 128 kiB 00:25:49.792 [2024-12-05 13:57:21.285474] Cache0: Part config offset : 384 kiB 00:25:49.792 [2024-12-05 13:57:21.285481] Cache0: Part config size : 48 kiB 00:25:49.792 [2024-12-05 13:57:21.285488] Cache0: Part runtime offset : 640 kiB 00:25:49.792 [2024-12-05 13:57:21.285494] Cache0: Part runtime size : 72 kiB 00:25:49.792 [2024-12-05 13:57:21.285501] Cache0: Core config offset : 768 kiB 00:25:49.792 [2024-12-05 13:57:21.285507] Cache0: Core config size : 512 kiB 00:25:49.792 [2024-12-05 13:57:21.285514] Cache0: Core runtime offset : 1792 kiB 00:25:49.792 [2024-12-05 13:57:21.285520] Cache0: Core runtime size : 1172 kiB 00:25:49.792 [2024-12-05 13:57:21.285527] Cache0: Core UUID offset : 3072 kiB 00:25:49.792 [2024-12-05 13:57:21.285533] Cache0: Core UUID size : 16384 kiB 00:25:49.792 [2024-12-05 13:57:21.285540] Cache0: Cleaning offset : 35840 kiB 00:25:49.792 [2024-12-05 13:57:21.285546] Cache0: Cleaning size : 28 kiB 00:25:49.792 [2024-12-05 13:57:21.285553] Cache0: LRU list offset : 35968 kiB 00:25:49.792 [2024-12-05 13:57:21.285559] Cache0: LRU list size : 20 kiB 00:25:49.792 [2024-12-05 13:57:21.285566] Cache0: Collision offset : 36096 kiB 00:25:49.792 [2024-12-05 13:57:21.285572] Cache0: Collision size : 56 kiB 00:25:49.792 [2024-12-05 13:57:21.285579] Cache0: List info offset : 36224 kiB 00:25:49.792 [2024-12-05 13:57:21.285585] Cache0: List info size : 20 kiB 00:25:49.792 [2024-12-05 13:57:21.285592] Cache0: Hash offset : 36352 kiB 00:25:49.792 [2024-12-05 13:57:21.285598] Cache0: Hash size : 4 kiB 00:25:49.792 
[2024-12-05 13:57:21.285605] Cache0: Cache line size: 32 kiB 00:25:49.792 [2024-12-05 13:57:21.285612] Cache0: Metadata size on device: 36480 kiB 00:25:49.792 [2024-12-05 13:57:21.295950] Cache0: Policy 'always' initialized successfully 00:25:50.051 [2024-12-05 13:57:21.383719] Cache0: Done saving cache state! 00:25:50.051 [2024-12-05 13:57:21.415092] Cache0: Cache attached 00:25:50.051 [2024-12-05 13:57:21.415186] Cache0: Successfully attached 00:25:50.051 [2024-12-05 13:57:21.415455] Cache0: Inserting core Malloc1 00:25:50.051 [2024-12-05 13:57:21.415477] Cache0.Malloc1: Seqential cutoff init 00:25:50.051 [2024-12-05 13:57:21.446575] Cache0.Malloc1: Successfully added 00:25:50.051 Cache0 00:25:50.051 13:57:21 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:50.051 13:57:21 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:25:50.309 true 00:25:50.309 13:57:21 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:50.309 13:57:21 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 32' 00:25:50.567 true 00:25:50.567 13:57:21 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:50.567 13:57:21 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 32' 00:25:50.826 true 00:25:50.826 13:57:22 ocf.ocf_configuration_change -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:25:51.083 [2024-12-05 13:57:22.351374] Cache0: Flushing cache 00:25:51.083 [2024-12-05 13:57:22.351409] Cache0: Flushing cache completed 00:25:51.083 [2024-12-05 13:57:22.351791] Cache0.Malloc1: Removing core 00:25:51.083 [2024-12-05 13:57:22.383698] Cache0: Core Malloc1 successfully removed 00:25:51.083 [2024-12-05 13:57:22.383755] Cache0: Stopping cache 00:25:51.083 [2024-12-05 13:57:22.467663] Cache0: Done saving cache state! 
00:25:51.083 [2024-12-05 13:57:22.482824] Cache Cache0 successfully stopped 00:25:51.083 13:57:22 ocf.ocf_configuration_change -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:51.341 13:57:22 ocf.ocf_configuration_change -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:51.599 13:57:22 ocf.ocf_configuration_change -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:25:51.599 13:57:22 ocf.ocf_configuration_change -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:25:51.857 Malloc0 00:25:51.857 13:57:23 ocf.ocf_configuration_change -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:25:52.115 Malloc1 00:25:52.115 13:57:23 ocf.ocf_configuration_change -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 64 00:25:52.374 [2024-12-05 13:57:23.787829] Inserting cache Cache0 00:25:52.374 [2024-12-05 13:57:23.788248] Cache0: Metadata initialized 00:25:52.374 [2024-12-05 13:57:23.788687] Cache0: Successfully added 00:25:52.374 [2024-12-05 13:57:23.788695] Cache0: Cache mode : wt 00:25:52.374 [2024-12-05 13:57:23.798540] Cache0: Super block config offset : 0 kiB 00:25:52.374 [2024-12-05 13:57:23.798563] Cache0: Super block config size : 2200 B 00:25:52.374 [2024-12-05 13:57:23.798571] Cache0: Super block runtime offset : 128 kiB 00:25:52.374 [2024-12-05 13:57:23.798577] Cache0: Super block runtime size : 4 B 00:25:52.374 [2024-12-05 13:57:23.798584] Cache0: Reserved offset : 256 kiB 00:25:52.374 [2024-12-05 13:57:23.798591] Cache0: Reserved size : 128 kiB 00:25:52.374 [2024-12-05 13:57:23.798597] Cache0: Part config offset : 384 kiB 00:25:52.374 [2024-12-05 13:57:23.798603] Cache0: Part config size : 48 kiB 00:25:52.374 [2024-12-05 13:57:23.798610] Cache0: Part runtime offset : 640 kiB 00:25:52.374 [2024-12-05 13:57:23.798616] Cache0: Part runtime size : 72 kiB 00:25:52.374 [2024-12-05 13:57:23.798622] Cache0: Core config offset : 768 kiB 00:25:52.374 [2024-12-05 13:57:23.798629] Cache0: Core config size : 512 kiB 00:25:52.374 [2024-12-05 13:57:23.798641] Cache0: Core runtime offset : 1792 kiB 00:25:52.374 [2024-12-05 13:57:23.798647] Cache0: Core runtime size : 1172 kiB 00:25:52.374 [2024-12-05 13:57:23.798654] Cache0: Core UUID offset : 3072 kiB 00:25:52.374 [2024-12-05 13:57:23.798660] Cache0: Core UUID size : 16384 kiB 00:25:52.374 [2024-12-05 13:57:23.798666] Cache0: Cleaning offset : 35840 kiB 00:25:52.374 [2024-12-05 13:57:23.798673] Cache0: Cleaning size : 16 kiB 00:25:52.374 [2024-12-05 13:57:23.798679] Cache0: LRU list offset : 35968 kiB 00:25:52.374 [2024-12-05 13:57:23.798685] Cache0: LRU list size : 12 kiB 00:25:52.374 [2024-12-05 13:57:23.798692] Cache0: Collision offset : 36096 kiB 00:25:52.374 [2024-12-05 13:57:23.798698] Cache0: Collision size : 44 kiB 00:25:52.374 [2024-12-05 13:57:23.798704] Cache0: List info offset : 36224 kiB 00:25:52.374 [2024-12-05 13:57:23.798711] Cache0: List info size : 12 kiB 00:25:52.374 [2024-12-05 13:57:23.798717] Cache0: Hash offset : 36352 kiB 00:25:52.374 [2024-12-05 13:57:23.798724] Cache0: Hash size : 4 kiB 00:25:52.374 
[2024-12-05 13:57:23.798730] Cache0: Cache line size: 64 kiB 00:25:52.374 [2024-12-05 13:57:23.798737] Cache0: Metadata size on device: 36480 kiB 00:25:52.374 [2024-12-05 13:57:23.808276] Cache0: Policy 'always' initialized successfully 00:25:52.374 [2024-12-05 13:57:23.892862] Cache0: Done saving cache state! 00:25:52.631 [2024-12-05 13:57:23.924113] Cache0: Cache attached 00:25:52.631 [2024-12-05 13:57:23.924207] Cache0: Successfully attached 00:25:52.631 [2024-12-05 13:57:23.924494] Cache0: Inserting core Malloc1 00:25:52.631 [2024-12-05 13:57:23.924514] Cache0.Malloc1: Seqential cutoff init 00:25:52.631 [2024-12-05 13:57:23.955081] Cache0.Malloc1: Successfully added 00:25:52.631 Cache0 00:25:52.631 13:57:23 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:52.631 13:57:23 ocf.ocf_configuration_change -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:25:52.889 true 00:25:52.889 13:57:24 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:52.889 13:57:24 ocf.ocf_configuration_change -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 64' 00:25:53.147 true 00:25:53.147 13:57:24 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:53.147 13:57:24 ocf.ocf_configuration_change -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 64' 00:25:53.405 true 00:25:53.405 13:57:24 ocf.ocf_configuration_change -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:25:53.665 [2024-12-05 13:57:24.928176] Cache0: Flushing cache 00:25:53.665 [2024-12-05 13:57:24.928213] Cache0: Flushing cache completed 00:25:53.665 [2024-12-05 13:57:24.928581] Cache0.Malloc1: Removing core 00:25:53.665 [2024-12-05 13:57:24.961658] Cache0: Core Malloc1 successfully removed 00:25:53.665 [2024-12-05 13:57:24.961719] Cache0: Stopping cache 00:25:53.665 [2024-12-05 13:57:25.044976] Cache0: Done saving cache state! 
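The five cache-line-size iterations above (4, 8, 16, 32 and 64 kiB) all follow the same create/verify/teardown pattern. A condensed sketch of one iteration, built only from the rpc.py calls and jq filters visible in the trace (paths shortened, the 64 kiB case shown, malloc geometry 101/512 taken from the log; the trace lists each rpc call and its jq filter on the same script line, so they are rendered here as one pipeline):
  rpc.py bdev_malloc_create 101 512 -b Malloc0
  rpc.py bdev_malloc_create 101 512 -b Malloc1
  rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 64
  # OCF bdev must report started with both cache and core attached
  rpc.py bdev_ocf_get_bdevs | jq -e '.[0] | .started and .cache.attached and .core.attached'
  # runtime view and saved configuration must agree on the requested line size
  rpc.py bdev_get_bdevs -b Cache0 | jq -e '.[0] | .driver_specific.cache_line_size == 64'
  rpc.py save_subsystem_config -n bdev | jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 64'
  rpc.py bdev_ocf_delete Cache0
  rpc.py bdev_malloc_delete Malloc0
  rpc.py bdev_malloc_delete Malloc1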
00:25:53.665 [2024-12-05 13:57:25.061067] Cache Cache0 successfully stopped 00:25:53.665 13:57:25 ocf.ocf_configuration_change -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:53.923 13:57:25 ocf.ocf_configuration_change -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:54.181 13:57:25 ocf.ocf_configuration_change -- management/configuration-change.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:25:54.439 Malloc0 00:25:54.439 13:57:25 ocf.ocf_configuration_change -- management/configuration-change.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:25:54.698 Malloc1 00:25:54.698 13:57:26 ocf.ocf_configuration_change -- management/configuration-change.sh@42 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 00:25:54.956 [2024-12-05 13:57:26.243000] Inserting cache Cache0 00:25:54.957 [2024-12-05 13:57:26.243430] Cache0: Metadata initialized 00:25:54.957 [2024-12-05 13:57:26.243871] Cache0: Successfully added 00:25:54.957 [2024-12-05 13:57:26.243880] Cache0: Cache mode : wt 00:25:54.957 [2024-12-05 13:57:26.253971] Cache0: Super block config offset : 0 kiB 00:25:54.957 [2024-12-05 13:57:26.253993] Cache0: Super block config size : 2200 B 00:25:54.957 [2024-12-05 13:57:26.254000] Cache0: Super block runtime offset : 128 kiB 00:25:54.957 [2024-12-05 13:57:26.254007] Cache0: Super block runtime size : 4 B 00:25:54.957 [2024-12-05 13:57:26.254014] Cache0: Reserved offset : 256 kiB 00:25:54.957 [2024-12-05 13:57:26.254020] Cache0: Reserved size : 128 kiB 00:25:54.957 [2024-12-05 13:57:26.254027] Cache0: Part config offset : 384 kiB 00:25:54.957 [2024-12-05 13:57:26.254033] Cache0: Part config size : 48 kiB 00:25:54.957 [2024-12-05 13:57:26.254039] Cache0: Part runtime offset : 640 kiB 00:25:54.957 [2024-12-05 13:57:26.254046] Cache0: Part runtime size : 72 kiB 00:25:54.957 [2024-12-05 13:57:26.254052] Cache0: Core config offset : 768 kiB 00:25:54.957 [2024-12-05 13:57:26.254059] Cache0: Core config size : 512 kiB 00:25:54.957 [2024-12-05 13:57:26.254065] Cache0: Core runtime offset : 1792 kiB 00:25:54.957 [2024-12-05 13:57:26.254071] Cache0: Core runtime size : 1172 kiB 00:25:54.957 [2024-12-05 13:57:26.254078] Cache0: Core UUID offset : 3072 kiB 00:25:54.957 [2024-12-05 13:57:26.254084] Cache0: Core UUID size : 16384 kiB 00:25:54.957 [2024-12-05 13:57:26.254091] Cache0: Cleaning offset : 35840 kiB 00:25:54.957 [2024-12-05 13:57:26.254097] Cache0: Cleaning size : 196 kiB 00:25:54.957 [2024-12-05 13:57:26.254103] Cache0: LRU list offset : 36096 kiB 00:25:54.957 [2024-12-05 13:57:26.254110] Cache0: LRU list size : 148 kiB 00:25:54.957 [2024-12-05 13:57:26.254116] Cache0: Collision offset : 36352 kiB 00:25:54.957 [2024-12-05 13:57:26.254123] Cache0: Collision size : 196 kiB 00:25:54.957 [2024-12-05 13:57:26.254129] Cache0: List info offset : 36608 kiB 00:25:54.957 [2024-12-05 13:57:26.254135] Cache0: List info size : 148 kiB 00:25:54.957 [2024-12-05 13:57:26.254142] Cache0: Hash offset : 36864 kiB 00:25:54.957 [2024-12-05 13:57:26.254148] Cache0: Hash size : 20 kiB 00:25:54.957 [2024-12-05 13:57:26.254155] Cache0: Cache line size: 4 kiB 00:25:54.957 [2024-12-05 13:57:26.254162] Cache0: Metadata size on device: 36992 kiB 00:25:54.957 
[2024-12-05 13:57:26.264052] Cache0: Policy 'always' initialized successfully 00:25:54.957 [2024-12-05 13:57:26.378790] Cache0: Done saving cache state! 00:25:54.957 [2024-12-05 13:57:26.410279] Cache0: Cache attached 00:25:54.957 [2024-12-05 13:57:26.410374] Cache0: Successfully attached 00:25:54.957 [2024-12-05 13:57:26.410663] Cache0: Inserting core Malloc1 00:25:54.957 [2024-12-05 13:57:26.410684] Cache0.Malloc1: Seqential cutoff init 00:25:54.957 [2024-12-05 13:57:26.441962] Cache0.Malloc1: Successfully added 00:25:54.957 Cache0 00:25:54.957 13:57:26 ocf.ocf_configuration_change -- management/configuration-change.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:25:54.957 13:57:26 ocf.ocf_configuration_change -- management/configuration-change.sh@44 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:25:55.215 true 00:25:55.215 13:57:26 ocf.ocf_configuration_change -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:25:55.215 13:57:26 ocf.ocf_configuration_change -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wt 00:25:55.474 [2024-12-05 13:57:26.989538] Cache0: Cache mode 'Write Through' is already set 00:25:55.474 wt 00:25:55.732 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:55.732 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wt"' 00:25:55.732 true 00:25:55.732 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:55.732 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wt"' 00:25:55.991 true 00:25:55.991 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:25:55.991 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wb 00:25:56.249 [2024-12-05 13:57:27.715617] Cache0: Changing cache mode from 'Write Through' to 'Write Back' successful 00:25:56.249 wb 00:25:56.249 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:56.249 13:57:27 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wb"' 00:25:56.508 true 00:25:56.508 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:56.508 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wb"' 00:25:56.767 true 00:25:56.767 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:25:56.767 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 
bdev_ocf_set_cache_mode Cache0 pt 00:25:57.026 [2024-12-05 13:57:28.542049] Cache0: Changing cache mode from 'Write Back' to 'Pass Through' successful 00:25:57.026 pt 00:25:57.285 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:57.285 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "pt"' 00:25:57.543 true 00:25:57.543 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:57.543 13:57:28 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "pt"' 00:25:57.802 true 00:25:57.802 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:25:57.802 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wa 00:25:58.061 [2024-12-05 13:57:29.352257] Cache0: Changing cache mode from 'Pass Through' to 'Write Around' successful 00:25:58.061 wa 00:25:58.061 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:58.061 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wa"' 00:25:58.319 true 00:25:58.319 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wa"' 00:25:58.319 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:58.578 true 00:25:58.578 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:25:58.578 13:57:29 ocf.ocf_configuration_change -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wi 00:25:58.835 [2024-12-05 13:57:30.166575] Cache0: Changing cache mode from 'Write Around' to 'Write Invalidate' successful 00:25:58.835 wi 00:25:58.835 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:58.835 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wi"' 00:25:59.094 true 00:25:59.094 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:59.094 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wi"' 00:25:59.352 true 00:25:59.352 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:25:59.352 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@49 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wo 00:25:59.612 [2024-12-05 13:57:30.980899] Cache0: Changing cache mode from 'Write Invalidate' to 'Write Only' successful 00:25:59.612 wo 00:25:59.612 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:25:59.612 13:57:30 ocf.ocf_configuration_change -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wo"' 00:25:59.870 true 00:25:59.870 13:57:31 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:59.870 13:57:31 ocf.ocf_configuration_change -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wo"' 00:26:00.128 true 00:26:00.128 13:57:31 ocf.ocf_configuration_change -- management/configuration-change.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p always -t 64 00:26:00.386 [2024-12-05 13:57:31.783195] Cache0.Malloc1: Changing sequential cutoff policy from full to always 00:26:00.386 [2024-12-05 13:57:31.783265] Cache0.Malloc1: Changing sequential cutoff threshold from 1024 to 65536 bytes successful 00:26:00.386 13:57:31 ocf.ocf_configuration_change -- management/configuration-change.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p never -t 16 00:26:00.643 [2024-12-05 13:57:32.051934] Cache0.Malloc1: Changing sequential cutoff policy from always to never 00:26:00.643 [2024-12-05 13:57:32.051996] Cache0.Malloc1: Changing sequential cutoff threshold from 65536 to 16384 bytes successful 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- management/configuration-change.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- management/configuration-change.sh@63 -- # killprocess 3970884 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@954 -- # '[' -z 3970884 ']' 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@958 -- # kill -0 3970884 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@959 -- # uname 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3970884 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3970884' 00:26:00.643 killing process with pid 3970884 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@973 -- # kill 3970884 00:26:00.643 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@978 -- # wait 3970884 00:26:00.901 [2024-12-05 13:57:32.291964] Cache0: Flushing cache 00:26:00.901 [2024-12-05 13:57:32.292013] Cache0: Flushing cache completed 00:26:00.901 [2024-12-05 13:57:32.292065] Cache0: Stopping cache 00:26:00.901 [2024-12-05 13:57:32.400152] Cache0: Done saving cache state! 
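The cache-mode portion of the test traced above reduces to a loop over the six OCF modes plus two sequential-cutoff changes. A condensed sketch using only the rpc.py calls and jq filters shown in the trace (rendered as pipelines, exact quoting is illustrative):
  for mode in wt wb pt wa wi wo; do
      rpc.py bdev_ocf_set_cache_mode Cache0 "$mode"
      rpc.py bdev_get_bdevs -b Cache0 | jq -e ".[0] | .driver_specific.mode == \"$mode\""
      rpc.py save_subsystem_config -n bdev | jq -e ".config | .[] | select(.method == \"bdev_ocf_create\") | .params.mode == \"$mode\""
  done
  # -t is interpreted in KiB: the log reports 64 -> 65536 bytes and 16 -> 16384 bytes
  rpc.py bdev_ocf_set_seqcutoff Cache0 -p always -t 64
  rpc.py bdev_ocf_set_seqcutoff Cache0 -p never -t 16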
00:26:00.901 [2024-12-05 13:57:32.416030] Cache Cache0 successfully stopped 00:26:01.468 00:26:01.468 real 0m20.492s 00:26:01.468 user 0m34.903s 00:26:01.468 sys 0m3.336s 00:26:01.468 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.468 13:57:32 ocf.ocf_configuration_change -- common/autotest_common.sh@10 -- # set +x 00:26:01.468 ************************************ 00:26:01.468 END TEST ocf_configuration_change 00:26:01.468 ************************************ 00:26:01.468 00:26:01.468 real 1m45.828s 00:26:01.468 user 2m42.130s 00:26:01.468 sys 0m19.670s 00:26:01.468 13:57:32 ocf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.468 13:57:32 ocf -- common/autotest_common.sh@10 -- # set +x 00:26:01.468 ************************************ 00:26:01.468 END TEST ocf 00:26:01.468 ************************************ 00:26:01.468 13:57:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:01.468 13:57:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:01.468 13:57:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:01.468 13:57:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:01.468 13:57:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:01.468 13:57:32 -- spdk/autotest.sh@366 -- # [[ 1 -eq 1 ]] 00:26:01.468 13:57:32 -- spdk/autotest.sh@367 -- # run_test scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh 00:26:01.468 13:57:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:01.468 13:57:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.468 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:26:01.468 ************************************ 00:26:01.468 START TEST scheduler 00:26:01.468 ************************************ 00:26:01.468 13:57:32 scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh 00:26:01.468 * Looking for test storage... 
00:26:01.468 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:26:01.468 13:57:32 scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:01.727 13:57:32 scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:26:01.727 13:57:32 scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:01.727 13:57:33 scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@344 -- # case "$op" in 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@345 -- # : 1 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@365 -- # decimal 1 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@353 -- # local d=1 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@355 -- # echo 1 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@366 -- # decimal 2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@353 -- # local d=2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@355 -- # echo 2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.727 13:57:33 scheduler -- scripts/common.sh@368 -- # return 0 00:26:01.727 13:57:33 scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.727 13:57:33 scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:01.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.727 --rc genhtml_branch_coverage=1 00:26:01.727 --rc genhtml_function_coverage=1 00:26:01.727 --rc genhtml_legend=1 00:26:01.727 --rc geninfo_all_blocks=1 00:26:01.727 --rc geninfo_unexecuted_blocks=1 00:26:01.727 00:26:01.727 ' 00:26:01.727 13:57:33 scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:01.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.727 --rc genhtml_branch_coverage=1 00:26:01.727 --rc genhtml_function_coverage=1 00:26:01.727 --rc genhtml_legend=1 00:26:01.727 --rc geninfo_all_blocks=1 00:26:01.727 --rc geninfo_unexecuted_blocks=1 00:26:01.727 00:26:01.727 ' 00:26:01.727 13:57:33 scheduler -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:01.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.727 --rc genhtml_branch_coverage=1 00:26:01.727 --rc genhtml_function_coverage=1 00:26:01.727 --rc genhtml_legend=1 00:26:01.727 --rc geninfo_all_blocks=1 00:26:01.727 --rc geninfo_unexecuted_blocks=1 00:26:01.727 00:26:01.727 ' 00:26:01.727 13:57:33 scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:01.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.727 --rc genhtml_branch_coverage=1 00:26:01.727 --rc genhtml_function_coverage=1 00:26:01.727 --rc genhtml_legend=1 00:26:01.727 --rc geninfo_all_blocks=1 00:26:01.727 --rc geninfo_unexecuted_blocks=1 00:26:01.727 00:26:01.727 ' 00:26:01.727 13:57:33 scheduler -- scheduler/scheduler.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/isolate_cores.sh 00:26:01.727 13:57:33 scheduler -- scheduler/isolate_cores.sh@6 -- # xtrace_disable 00:26:01.727 13:57:33 scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:01.985 Moving 3973593 (PF_SUPERPRIV,PF_RANDOMIZE) to / from /user.slice/user-1001.slice/session-3.scope 00:26:01.985 Moving 3973593 (PF_SUPERPRIV,PF_RANDOMIZE) to /cpuset from / 00:26:01.985 13:57:33 scheduler -- scheduler/scheduler.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:26:02.920 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:02.920 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:03.878 13:57:35 scheduler -- scheduler/scheduler.sh@14 -- # run_test scheduler_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rpc.sh 00:26:03.878 13:57:35 scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:03.878 13:57:35 scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:03.878 13:57:35 scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:04.446 ************************************ 00:26:04.446 START TEST scheduler_rpc 00:26:04.446 ************************************ 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rpc.sh 00:26:04.446 * Looking for test storage... 
00:26:04.446 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@345 -- # : 1 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@353 -- # local d=1 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@355 -- # echo 1 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@353 -- # local d=2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@355 -- # echo 2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- scripts/common.sh@368 -- # return 0 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:04.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.446 --rc genhtml_branch_coverage=1 00:26:04.446 --rc genhtml_function_coverage=1 00:26:04.446 --rc genhtml_legend=1 00:26:04.446 --rc geninfo_all_blocks=1 00:26:04.446 --rc geninfo_unexecuted_blocks=1 00:26:04.446 00:26:04.446 ' 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:04.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.446 --rc genhtml_branch_coverage=1 00:26:04.446 --rc genhtml_function_coverage=1 00:26:04.446 --rc genhtml_legend=1 00:26:04.446 --rc geninfo_all_blocks=1 00:26:04.446 --rc geninfo_unexecuted_blocks=1 00:26:04.446 00:26:04.446 ' 00:26:04.446 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:04.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.446 --rc genhtml_branch_coverage=1 00:26:04.446 --rc genhtml_function_coverage=1 00:26:04.446 --rc genhtml_legend=1 00:26:04.446 --rc geninfo_all_blocks=1 00:26:04.446 --rc geninfo_unexecuted_blocks=1 00:26:04.447 00:26:04.447 ' 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:04.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.447 --rc genhtml_branch_coverage=1 00:26:04.447 --rc genhtml_function_coverage=1 00:26:04.447 --rc genhtml_legend=1 00:26:04.447 --rc geninfo_all_blocks=1 00:26:04.447 --rc geninfo_unexecuted_blocks=1 00:26:04.447 00:26:04.447 ' 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/rpc.sh@11 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- 
scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/cgroups.sh@244 -- # check_cgroup 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/cgroups.sh@10 -- # echo 2 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/rpc.sh@13 -- # rpc=rpc_cmd 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- scheduler/rpc.sh@116 -- # run_test scheduler_opts scheduler_opts 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.447 13:57:35 scheduler.scheduler_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:04.447 ************************************ 00:26:04.447 START TEST scheduler_opts 00:26:04.447 ************************************ 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@1129 -- # scheduler_opts 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@44 -- # spdk_pid=3974934 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@45 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --wait-for-rpc 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@46 -- # waitforlisten 3974934 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@835 -- # '[' -z 3974934 ']' 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.447 13:57:35 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:04.447 [2024-12-05 13:57:35.850076] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
00:26:04.447 [2024-12-05 13:57:35.850135] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974934 ] 00:26:04.447 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.447 [2024-12-05 13:57:35.927123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 8 00:26:04.706 [2024-12-05 13:57:35.986529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.706 [2024-12-05 13:57:35.986555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:04.706 [2024-12-05 13:57:35.986629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:04.706 [2024-12-05 13:57:35.986649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 37 00:26:04.706 [2024-12-05 13:57:35.986669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 38 00:26:04.706 [2024-12-05 13:57:35.986687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 39 00:26:04.706 [2024-12-05 13:57:35.986710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 40 00:26:04.706 [2024-12-05 13:57:35.986714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.645 13:57:36 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.645 13:57:36 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@868 -- # return 0 00:26:05.645 13:57:36 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@49 -- # rpc_cmd framework_set_scheduler dynamic -p 424242 00:26:05.645 13:57:36 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.645 13:57:36 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:05.645 [2024-12-05 13:57:37.044885] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:05.645 [2024-12-05 13:57:37.044940] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:05.645 [2024-12-05 13:57:37.044957] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@50 -- # rpc_cmd framework_get_scheduler 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@50 -- # jq -r '. 
| select(.scheduler_name == "dynamic") | .scheduler_period' 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@50 -- # [[ 424242 -eq 424242 ]] 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@53 -- # rpc_cmd framework_set_scheduler dynamic --core-limit 42 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:05.645 [2024-12-05 13:57:37.129547] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:05.645 [2024-12-05 13:57:37.129579] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 42 00:26:05.645 [2024-12-05 13:57:37.129592] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@54 -- # rpc_cmd framework_get_scheduler 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@54 -- # jq -r '. | select(.scheduler_name == "dynamic") | .core_limit' 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:05.645 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@54 -- # [[ 42 -eq 42 ]] 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@57 -- # rpc_cmd framework_set_scheduler gscheduler 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@58 -- # rpc_cmd framework_get_scheduler 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@58 -- # jq -r .scheduler_name 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@58 -- # [[ gscheduler == \g\s\c\h\e\d\u\l\e\r ]] 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@59 -- # rpc_cmd framework_set_scheduler dynamic 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set 
+x 00:26:05.905 [2024-12-05 13:57:37.417096] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:05.905 [2024-12-05 13:57:37.417158] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 42 00:26:05.905 [2024-12-05 13:57:37.417185] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@60 -- # rpc_cmd framework_get_scheduler 00:26:05.905 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@60 -- # jq -r '. | select(.scheduler_name == "dynamic") | .core_limit' 00:26:05.906 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.906 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:05.906 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.165 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@60 -- # [[ 42 -eq 42 ]] 00:26:06.165 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@63 -- # rpc_cmd framework_start_init 00:26:06.165 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.165 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:06.165 [2024-12-05 13:57:37.653794] 'OCF_Core' volume operations registered 00:26:06.165 [2024-12-05 13:57:37.653827] 'OCF_Cache' volume operations registered 00:26:06.165 [2024-12-05 13:57:37.657553] 'OCF Composite' volume operations registered 00:26:06.165 [2024-12-05 13:57:37.661348] 'SPDK_block_device' volume operations registered 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- scheduler/rpc.sh@66 -- # killprocess 3974934 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@954 -- # '[' -z 3974934 ']' 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@958 -- # kill -0 3974934 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@959 -- # uname 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3974934 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3974934' 00:26:06.426 killing process with pid 3974934 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@973 -- # kill 3974934 00:26:06.426 13:57:37 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@978 -- # wait 3974934 00:26:06.996 00:26:06.996 real 0m2.587s 00:26:06.996 user 0m9.745s 00:26:06.996 sys 0m0.527s 
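For anyone replaying the scheduler_opts checks above by hand: the rpc_cmd calls in the trace appear to wrap scripts/rpc.py against the default /var/tmp/spdk.sock of a target started with --wait-for-rpc. A minimal sketch of the same round-trip, using only the flags and jq filters visible in the trace (paths and socket are assumptions):
# reproduce the core_limit round-trip seen above (sketch, not part of the recorded run)
./scripts/rpc.py framework_set_scheduler dynamic --core-limit 42      # triggers the set_opts NOTICE lines
./scripts/rpc.py framework_get_scheduler | jq -r '. | select(.scheduler_name == "dynamic") | .core_limit'   # expect 42
./scripts/rpc.py framework_set_scheduler gscheduler
./scripts/rpc.py framework_get_scheduler | jq -r .scheduler_name      # expect gscheduler
./scripts/rpc.py framework_start_init                                  # after this the OCF/SPDK_block_device registrations appear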
00:26:06.996 13:57:38 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.996 13:57:38 scheduler.scheduler_rpc.scheduler_opts -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 ************************************ 00:26:06.996 END TEST scheduler_opts 00:26:06.996 ************************************ 00:26:06.996 13:57:38 scheduler.scheduler_rpc -- scheduler/rpc.sh@117 -- # run_test static_as_default static_as_default 00:26:06.996 13:57:38 scheduler.scheduler_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:06.996 13:57:38 scheduler.scheduler_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.996 13:57:38 scheduler.scheduler_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:06.996 ************************************ 00:26:06.996 START TEST static_as_default 00:26:06.996 ************************************ 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@1129 -- # static_as_default 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@71 -- # spdk_pid=3975306 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@72 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@70 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --wait-for-rpc 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@73 -- # waitforlisten 3975306 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@835 -- # '[' -z 3975306 ']' 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.996 13:57:38 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:07.256 [2024-12-05 13:57:38.621403] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
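The static_as_default test just launched spdk_tgt with --wait-for-rpc, which holds the framework before subsystem init so the scheduler defaults can be inspected first. A hand-run sketch of that startup, assuming the build path and core mask shown in the trace and that the RPC socket is already listening:
# start the target paused, query the pre-init defaults, then finish init (sketch)
build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --wait-for-rpc &
./scripts/rpc.py framework_get_scheduler | jq -r '. | select(.scheduler_name == null)'   # defaults only: scheduler_period 0, scheduling_core 1
./scripts/rpc.py framework_start_init                                                     # static becomes the active scheduler
./scripts/rpc.py framework_get_scheduler | jq -r .scheduler_name                          # expect static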
00:26:07.256 [2024-12-05 13:57:38.621541] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975306 ] 00:26:07.256 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.256 [2024-12-05 13:57:38.775445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 8 00:26:07.515 [2024-12-05 13:57:38.836812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.515 [2024-12-05 13:57:38.836926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:07.515 [2024-12-05 13:57:38.837023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 39 00:26:07.515 [2024-12-05 13:57:38.836941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 37 00:26:07.515 [2024-12-05 13:57:38.836983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 38 00:26:07.515 [2024-12-05 13:57:38.836844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.515 [2024-12-05 13:57:38.837058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.515 [2024-12-05 13:57:38.837060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 40 00:26:08.081 13:57:39 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.081 13:57:39 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@868 -- # return 0 00:26:08.081 13:57:39 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@77 -- # rpc_cmd framework_get_scheduler 00:26:08.081 13:57:39 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.081 13:57:39 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@77 -- # jq -r '. 
| select(.scheduler_name == null)' 00:26:08.081 13:57:39 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:08.081 13:57:39 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.343 13:57:39 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@77 -- # [[ -n { 00:26:08.343 "scheduler_period": 0, 00:26:08.343 "isolated_core_mask": "0", 00:26:08.343 "scheduling_core": 1 00:26:08.343 } ]] 00:26:08.343 13:57:39 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@78 -- # rpc_cmd framework_start_init 00:26:08.343 13:57:39 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.343 13:57:39 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:08.686 [2024-12-05 13:57:39.942998] 'OCF_Core' volume operations registered 00:26:08.686 [2024-12-05 13:57:39.943049] 'OCF_Cache' volume operations registered 00:26:08.686 [2024-12-05 13:57:39.948432] 'OCF Composite' volume operations registered 00:26:08.686 [2024-12-05 13:57:39.953846] 'SPDK_block_device' volume operations registered 00:26:08.686 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.686 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@79 -- # rpc_cmd framework_get_scheduler 00:26:08.686 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@79 -- # jq -r .scheduler_name 00:26:08.686 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.686 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:08.686 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@79 -- # [[ static == \s\t\a\t\i\c ]] 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@84 -- # rpc_cmd framework_get_reactors 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@84 -- # jq -r '.reactors[0].lcore' 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@84 -- # main_cpu=1 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@85 -- # rpc_cmd framework_get_reactors 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@85 -- # jq -r '.reactors[1].lcore' 00:26:09.060 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.318 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.318 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@85 -- # other_cpu=2 00:26:09.318 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@89 -- # rpc_cmd framework_get_reactors 00:26:09.318 13:57:40 scheduler.scheduler_rpc.static_as_default -- 
scheduler/rpc.sh@89 -- # jq -r '.reactors[] | select(.lcore == 2).lw_threads[0].id' 00:26:09.318 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.318 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.318 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@89 -- # thread_id=2 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@91 -- # rpc_cmd framework_set_scheduler dynamic 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.575 [2024-12-05 13:57:40.960823] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:09.575 [2024-12-05 13:57:40.960867] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:09.575 [2024-12-05 13:57:40.960883] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@92 -- # rpc_cmd framework_get_scheduler 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@92 -- # jq -r .scheduler_name 00:26:09.575 13:57:40 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.576 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@92 -- # [[ dynamic == \d\y\n\a\m\i\c ]] 00:26:09.576 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@95 -- # rpc_cmd framework_get_reactors 00:26:09.576 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.576 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.576 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@96 -- # jq -e -r '.reactors[] | select(.lcore == 1).lw_threads[] | select(.id == 2)' 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.832 { 00:26:09.832 "name": "nvmf_tgt_poll_group_000", 00:26:09.832 "id": 2, 00:26:09.832 "cpumask": "1e00000001e", 00:26:09.832 "elapsed": 201791168 00:26:09.832 } 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@98 -- # rpc_cmd framework_set_scheduler static 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@99 -- # jq -r .scheduler_name 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@99 -- # 
rpc_cmd framework_get_scheduler 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@99 -- # [[ static == \s\t\a\t\i\c ]] 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@103 -- # jq -e -r '.reactors[] | select(.lcore == 2).lw_threads[] | select(.id == 2)' 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@102 -- # rpc_cmd framework_get_reactors 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.832 { 00:26:09.832 "name": "nvmf_tgt_poll_group_000", 00:26:09.832 "id": 2, 00:26:09.832 "cpumask": "1e00000001e", 00:26:09.832 "elapsed": 219977674 00:26:09.832 } 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@106 -- # rpc_cmd framework_set_scheduler static --mappings 2:1 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@110 -- # jq -e -r '.reactors[] | select(.lcore == 1).lw_threads[] | select(.id == 2)' 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@109 -- # rpc_cmd framework_get_reactors 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.832 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.090 { 00:26:10.090 "name": "nvmf_tgt_poll_group_000", 00:26:10.090 "id": 2, 00:26:10.090 "cpumask": "1e00000001e", 00:26:10.090 "elapsed": 35768036 00:26:10.090 } 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- scheduler/rpc.sh@113 -- # killprocess 3975306 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@954 -- # '[' -z 3975306 ']' 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@958 -- # kill -0 3975306 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@959 -- # uname 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3975306 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3975306' 00:26:10.090 killing process with pid 3975306 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@973 -- # kill 3975306 00:26:10.090 13:57:41 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@978 -- # wait 3975306 00:26:10.657 00:26:10.657 real 0m3.638s 00:26:10.657 user 0m25.447s 00:26:10.657 sys 0m0.852s 00:26:10.657 13:57:42 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.657 13:57:42 scheduler.scheduler_rpc.static_as_default -- common/autotest_common.sh@10 -- # set +x 00:26:10.657 ************************************ 00:26:10.657 END TEST static_as_default 00:26:10.657 ************************************ 00:26:10.657 13:57:42 scheduler.scheduler_rpc -- scheduler/rpc.sh@118 -- # run_test framework_get_governor framework_get_governor 00:26:10.657 13:57:42 scheduler.scheduler_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:10.657 13:57:42 scheduler.scheduler_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.657 13:57:42 scheduler.scheduler_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:10.916 ************************************ 00:26:10.916 START TEST framework_get_governor 00:26:10.916 ************************************ 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@1129 -- # framework_get_governor 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@17 -- # spdk_pid=3975879 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@19 -- # waitforlisten 3975879 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@835 -- # '[' -z 3975879 ']' 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.916 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:10.916 [2024-12-05 13:57:42.263960] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
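The three framework_get_reactors snapshots above follow thread id 2 (nvmf_tgt_poll_group_000): the dynamic scheduler reports it on lcore 1, switching back to static reports it on lcore 2, and static --mappings 2:1 places it on lcore 1 again. A sketch of that last check with the same RPCs and jq filter as the trace (rpc.py path assumed):
# pin thread id 2 to lcore 1 and confirm where it landed (sketch)
./scripts/rpc.py framework_set_scheduler static --mappings 2:1
./scripts/rpc.py framework_get_reactors | jq -e -r '.reactors[] | select(.lcore == 1).lw_threads[] | select(.id == 2)'   # non-empty JSON on success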
00:26:10.916 [2024-12-05 13:57:42.264034] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975879 ] 00:26:10.916 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.916 [2024-12-05 13:57:42.380298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 8 00:26:11.174 [2024-12-05 13:57:42.447028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.174 [2024-12-05 13:57:42.447059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:11.174 [2024-12-05 13:57:42.447120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:11.174 [2024-12-05 13:57:42.447142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 37 00:26:11.174 [2024-12-05 13:57:42.447165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 38 00:26:11.174 [2024-12-05 13:57:42.447187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 39 00:26:11.174 [2024-12-05 13:57:42.447217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 40 00:26:11.174 [2024-12-05 13:57:42.447222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.432 [2024-12-05 13:57:42.717311] 'OCF_Core' volume operations registered 00:26:11.432 [2024-12-05 13:57:42.717353] 'OCF_Cache' volume operations registered 00:26:11.432 [2024-12-05 13:57:42.722956] 'OCF Composite' volume operations registered 00:26:11.432 [2024-12-05 13:57:42.728570] 'SPDK_block_device' volume operations registered 00:26:11.690 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.690 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@868 -- # return 0 00:26:11.690 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@22 -- # jq -r .scheduler_name 00:26:11.690 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@22 -- # rpc_cmd framework_get_scheduler 00:26:11.690 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.690 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:11.690 13:57:42 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@22 -- # [[ static == \s\t\a\t\i\c ]] 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@23 -- # jq -r '.[]' 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@23 -- # rpc_cmd framework_get_governor 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@23 -- # [[ -z '' ]] 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@26 -- # rpc_cmd framework_set_scheduler gscheduler 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@27 -- # rpc_cmd framework_get_scheduler 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:11.690 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@27 -- # jq -r .scheduler_name 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@27 -- # [[ gscheduler == \g\s\c\h\e\d\u\l\e\r ]] 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@28 -- # rpc_cmd framework_get_governor 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@28 -- # jq -r .governor_name 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@28 -- # [[ dpdk_governor == \d\p\d\k\_\g\o\v\e\r\n\o\r ]] 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@31 -- # jq -r '.cores | length' 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@31 -- # rpc_cmd framework_get_governor 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@31 -- # [[ 8 -eq 8 ]] 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@32 -- # jq -r '.cores[0].lcore_id' 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@32 -- # rpc_cmd framework_get_governor 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.950 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@32 -- # [[ 1 -eq 1 ]] 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@33 -- # rpc_cmd framework_get_governor 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@33 -- # jq -r '.cores[0].current_frequency' 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@33 -- # [[ -n 2300001 ]] 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@36 -- # jq -r .module_specific.env 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@36 -- # rpc_cmd framework_get_governor 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.209 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:12.468 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.468 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@36 -- # [[ -n intel-pstate ]] 00:26:12.468 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:26:12.468 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- scheduler/rpc.sh@39 -- # killprocess 3975879 00:26:12.468 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@954 -- # '[' -z 3975879 ']' 00:26:12.468 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@958 -- # kill -0 3975879 00:26:12.468 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@959 -- # uname 00:26:12.469 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.469 13:57:43 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3975879 00:26:12.728 13:57:44 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.728 13:57:44 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:12.728 13:57:44 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3975879' 00:26:12.728 killing process with pid 3975879 00:26:12.728 13:57:44 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@973 -- # kill 3975879 00:26:12.728 13:57:44 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@978 -- # wait 3975879 00:26:13.297 00:26:13.297 real 0m2.390s 00:26:13.297 user 0m16.268s 00:26:13.297 sys 0m0.701s 00:26:13.297 13:57:44 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.297 13:57:44 scheduler.scheduler_rpc.framework_get_governor -- common/autotest_common.sh@10 -- # set +x 00:26:13.297 ************************************ 00:26:13.297 END TEST framework_get_governor 00:26:13.297 ************************************ 00:26:13.297 00:26:13.297 real 0m8.962s 00:26:13.297 user 0m51.591s 00:26:13.297 sys 0m2.241s 00:26:13.297 13:57:44 scheduler.scheduler_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.297 13:57:44 scheduler.scheduler_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:13.297 ************************************ 00:26:13.297 END TEST scheduler_rpc 00:26:13.297 
************************************ 00:26:13.297 13:57:44 scheduler -- scheduler/scheduler.sh@15 -- # run_test idle /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh 00:26:13.297 13:57:44 scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:13.297 13:57:44 scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.297 13:57:44 scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:13.297 ************************************ 00:26:13.297 START TEST idle 00:26:13.297 ************************************ 00:26:13.297 13:57:44 scheduler.idle -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh 00:26:13.557 * Looking for test storage... 00:26:13.557 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:26:13.557 13:57:44 scheduler.idle -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:13.557 13:57:44 scheduler.idle -- common/autotest_common.sh@1711 -- # lcov --version 00:26:13.557 13:57:44 scheduler.idle -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:13.557 13:57:44 scheduler.idle -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@344 -- # case "$op" in 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@345 -- # : 1 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@365 -- # decimal 1 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@353 -- # local d=1 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@355 -- # echo 1 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@366 -- # decimal 2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@353 -- # local d=2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@355 -- # echo 2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.557 13:57:44 scheduler.idle -- scripts/common.sh@368 -- # return 0 00:26:13.557 13:57:44 scheduler.idle -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.557 13:57:44 scheduler.idle -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:13.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.557 --rc genhtml_branch_coverage=1 00:26:13.557 --rc genhtml_function_coverage=1 00:26:13.557 --rc genhtml_legend=1 00:26:13.557 --rc geninfo_all_blocks=1 00:26:13.557 --rc geninfo_unexecuted_blocks=1 00:26:13.557 00:26:13.557 ' 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:13.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.558 --rc genhtml_branch_coverage=1 00:26:13.558 --rc genhtml_function_coverage=1 00:26:13.558 --rc genhtml_legend=1 00:26:13.558 --rc geninfo_all_blocks=1 00:26:13.558 --rc geninfo_unexecuted_blocks=1 00:26:13.558 00:26:13.558 ' 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:13.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.558 --rc genhtml_branch_coverage=1 00:26:13.558 --rc genhtml_function_coverage=1 00:26:13.558 --rc genhtml_legend=1 00:26:13.558 --rc geninfo_all_blocks=1 00:26:13.558 --rc geninfo_unexecuted_blocks=1 00:26:13.558 00:26:13.558 ' 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:13.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.558 --rc genhtml_branch_coverage=1 00:26:13.558 --rc genhtml_function_coverage=1 00:26:13.558 --rc genhtml_legend=1 00:26:13.558 --rc geninfo_all_blocks=1 00:26:13.558 --rc geninfo_unexecuted_blocks=1 00:26:13.558 00:26:13.558 ' 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@11 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@11 -- # declare 
plugin=scheduler_plugin 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:26:13.558 13:57:44 scheduler.idle -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:26:13.558 13:57:44 scheduler.idle -- scheduler/cgroups.sh@244 -- # check_cgroup 00:26:13.558 13:57:44 scheduler.idle -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:26:13.558 13:57:44 scheduler.idle -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:26:13.558 13:57:44 scheduler.idle -- scheduler/cgroups.sh@10 -- # echo 2 00:26:13.558 13:57:44 scheduler.idle -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@13 -- # trap 'killprocess "$spdk_pid"' EXIT 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@71 -- # idle 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@36 -- # local reactor_framework 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@37 -- # local reactors thread 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@38 -- # local thread_cpumask 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@39 -- # local threads 00:26:13.558 13:57:44 scheduler.idle -- scheduler/idle.sh@41 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@412 -- # [[ -e /proc//status ]] 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@416 -- # spdk_pid=3976329 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@418 -- # waitforlisten 3976329 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@835 -- # '[' -z 3976329 ']' 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.558 13:57:44 scheduler.idle -- scheduler/common.sh@415 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.558 13:57:44 scheduler.idle -- common/autotest_common.sh@10 -- # set +x 00:26:13.558 [2024-12-05 13:57:45.034442] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
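exec_under_dynamic_scheduler, invoked by idle.sh above, starts the target with --wait-for-rpc and then drives it over RPC; the two rpc.py calls it issues are visible in the trace lines that follow. A condensed sketch of the same startup, assuming the helper does nothing beyond those calls and using the paths shown in the trace:
# launch paused on main core 1, switch to the dynamic scheduler, then init (sketch)
build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc &
scripts/rpc.py framework_set_scheduler dynamic    # prints the set_opts NOTICE lines seen below
scripts/rpc.py framework_start_init               # OCF volume operations register once init completes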
00:26:13.558 [2024-12-05 13:57:45.034596] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3976329 ] 00:26:13.817 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.817 [2024-12-05 13:57:45.207623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 8 00:26:13.817 [2024-12-05 13:57:45.279721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.817 [2024-12-05 13:57:45.279837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:13.817 [2024-12-05 13:57:45.279936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 39 00:26:13.817 [2024-12-05 13:57:45.279858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 37 00:26:13.817 [2024-12-05 13:57:45.279893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 38 00:26:13.817 [2024-12-05 13:57:45.279756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.817 [2024-12-05 13:57:45.279977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.817 [2024-12-05 13:57:45.279976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 40 00:26:14.383 13:57:45 scheduler.idle -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.383 13:57:45 scheduler.idle -- common/autotest_common.sh@868 -- # return 0 00:26:14.383 13:57:45 scheduler.idle -- scheduler/common.sh@419 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic 00:26:15.767 [2024-12-05 13:57:47.237829] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:15.767 [2024-12-05 13:57:47.237886] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:15.767 [2024-12-05 13:57:47.237905] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:15.767 13:57:47 scheduler.idle -- scheduler/common.sh@420 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:26:16.717 [2024-12-05 13:57:48.053227] 'OCF_Core' volume operations registered 00:26:16.717 [2024-12-05 13:57:48.053266] 'OCF_Cache' volume operations registered 00:26:16.717 [2024-12-05 13:57:48.058297] 'OCF Composite' volume operations registered 00:26:16.717 [2024-12-05 13:57:48.063391] 'SPDK_block_device' volume operations registered 00:26:16.977 13:57:48 scheduler.idle -- scheduler/idle.sh@48 -- # get_thread_stats_current 00:26:16.977 13:57:48 scheduler.idle -- scheduler/common.sh@435 -- # xtrace_disable 00:26:16.977 13:57:48 scheduler.idle -- common/autotest_common.sh@10 -- # set +x 00:26:21.173 13:57:52 scheduler.idle -- scheduler/idle.sh@50 -- # xtrace_disable 00:26:21.173 13:57:52 scheduler.idle -- common/autotest_common.sh@10 -- # set +x 00:26:21.173 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:26:21.173 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_000 cpumask: 0x1e00000001e 00:26:21.173 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_001 cpumask: 0x1e00000001e 00:26:21.173 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_002 cpumask: 0x1e00000001e 00:26:21.173 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_003 cpumask: 0x1e00000001e 00:26:21.433 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_004 cpumask: 0x1e00000001e 
00:26:21.433 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_005 cpumask: 0x1e00000001e 00:26:21.433 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_006 cpumask: 0x1e00000001e 00:26:21.433 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_007 cpumask: 0x1e00000001e 00:26:21.692 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:26:21.692 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:26:21.692 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:26:21.692 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:26:21.951 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:26:21.951 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:26:21.951 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:26:22.210 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:26:26.403 [load: 2%, idle: 855132288, busy: 25286194] app_thread is idle 00:26:26.403 [load: 0%, idle: 592460756, busy: 239642] nvmf_tgt_poll_group_000 is idle 00:26:26.403 [load: 0%, idle: 593097062, busy: 261444] nvmf_tgt_poll_group_001 is idle 00:26:26.403 [load: 0%, idle: 594429470, busy: 240606] nvmf_tgt_poll_group_002 is idle 00:26:26.403 [load: 0%, idle: 594266124, busy: 239000] nvmf_tgt_poll_group_003 is idle 00:26:26.403 [load: 0%, idle: 593216562, busy: 239554] nvmf_tgt_poll_group_004 is idle 00:26:26.403 [load: 0%, idle: 593352566, busy: 239030] nvmf_tgt_poll_group_005 is idle 00:26:26.403 [load: 0%, idle: 593057286, busy: 238780] nvmf_tgt_poll_group_006 is idle 00:26:26.403 [load: 0%, idle: 593266992, busy: 239602] nvmf_tgt_poll_group_007 is idle 00:26:26.403 [load: 0%, idle: 799636926, busy: 259204] iscsi_poll_group_1 is idle 00:26:26.403 [load: 0%, idle: 803501312, busy: 258510] iscsi_poll_group_2 is idle 00:26:26.403 [load: 0%, idle: 798842992, busy: 257360] iscsi_poll_group_3 is idle 00:26:26.403 [load: 0%, idle: 800185826, busy: 276678] iscsi_poll_group_4 is idle 00:26:26.403 [load: 0%, idle: 797629422, busy: 264016] iscsi_poll_group_37 is idle 00:26:26.403 [load: 0%, idle: 799996062, busy: 264236] iscsi_poll_group_38 is idle 00:26:26.403 [load: 0%, idle: 798059112, busy: 263698] iscsi_poll_group_39 is idle 00:26:26.403 [load: 0%, idle: 797794854, busy: 264432] iscsi_poll_group_40 is idle 00:26:26.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:26:26.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_000 cpumask: 0x1e00000001e 00:26:26.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_001 cpumask: 0x1e00000001e 00:26:26.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_002 cpumask: 0x1e00000001e 00:26:26.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_003 cpumask: 0x1e00000001e 00:26:26.660 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_004 cpumask: 0x1e00000001e 00:26:26.660 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_005 cpumask: 0x1e00000001e 00:26:26.660 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_006 cpumask: 0x1e00000001e 00:26:26.660 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_007 cpumask: 0x1e00000001e 00:26:26.917 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:26:26.917 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 
cpumask: 0x4 00:26:26.917 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:26:27.175 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:26:27.175 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:26:27.175 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:26:27.175 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:26:27.435 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:26:31.625 [load: 3%, idle: 855958030, busy: 35040186] app_thread is idle 00:26:31.625 [load: 0%, idle: 593067762, busy: 313962] nvmf_tgt_poll_group_000 is idle 00:26:31.625 [load: 0%, idle: 594471460, busy: 329594] nvmf_tgt_poll_group_001 is idle 00:26:31.625 [load: 0%, idle: 593912758, busy: 313414] nvmf_tgt_poll_group_002 is idle 00:26:31.625 [load: 0%, idle: 593636796, busy: 313892] nvmf_tgt_poll_group_003 is idle 00:26:31.625 [load: 0%, idle: 593794672, busy: 313470] nvmf_tgt_poll_group_004 is idle 00:26:31.625 [load: 0%, idle: 593620126, busy: 314076] nvmf_tgt_poll_group_005 is idle 00:26:31.625 [load: 0%, idle: 593496142, busy: 313804] nvmf_tgt_poll_group_006 is idle 00:26:31.625 [load: 0%, idle: 594042258, busy: 313194] nvmf_tgt_poll_group_007 is idle 00:26:31.625 [load: 0%, idle: 800832454, busy: 356638] iscsi_poll_group_1 is idle 00:26:31.625 [load: 0%, idle: 803914740, busy: 339894] iscsi_poll_group_2 is idle 00:26:31.625 [load: 0%, idle: 799718336, busy: 338532] iscsi_poll_group_3 is idle 00:26:31.626 [load: 0%, idle: 801271404, busy: 338678] iscsi_poll_group_4 is idle 00:26:31.626 [load: 0%, idle: 798135796, busy: 348588] iscsi_poll_group_37 is idle 00:26:31.626 [load: 0%, idle: 802931374, busy: 348000] iscsi_poll_group_38 is idle 00:26:31.626 [load: 0%, idle: 798418902, busy: 347566] iscsi_poll_group_39 is idle 00:26:31.626 [load: 0%, idle: 798654792, busy: 348582] iscsi_poll_group_40 is idle 00:26:31.626 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:26:31.626 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_000 cpumask: 0x1e00000001e 00:26:31.626 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_001 cpumask: 0x1e00000001e 00:26:31.626 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_002 cpumask: 0x1e00000001e 00:26:31.626 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_003 cpumask: 0x1e00000001e 00:26:31.884 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_004 cpumask: 0x1e00000001e 00:26:31.884 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_005 cpumask: 0x1e00000001e 00:26:31.884 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_006 cpumask: 0x1e00000001e 00:26:32.144 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_007 cpumask: 0x1e00000001e 00:26:32.144 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:26:32.144 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:26:32.144 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:26:32.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:26:32.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:26:32.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:26:32.403 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 
0x8000000000 00:26:32.663 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:26:36.856 [load: 5%, idle: 852729264, busy: 45163212] app_thread is idle 00:26:36.856 [load: 0%, idle: 592789330, busy: 475382] nvmf_tgt_poll_group_000 is idle 00:26:36.856 [load: 0%, idle: 593021366, busy: 456076] nvmf_tgt_poll_group_001 is idle 00:26:36.856 [load: 0%, idle: 592874448, busy: 455760] nvmf_tgt_poll_group_002 is idle 00:26:36.856 [load: 0%, idle: 593823296, busy: 455726] nvmf_tgt_poll_group_003 is idle 00:26:36.856 [load: 0%, idle: 592763508, busy: 455784] nvmf_tgt_poll_group_004 is idle 00:26:36.856 [load: 0%, idle: 593106294, busy: 472656] nvmf_tgt_poll_group_005 is idle 00:26:36.856 [load: 0%, idle: 593401936, busy: 455884] nvmf_tgt_poll_group_006 is idle 00:26:36.856 [load: 0%, idle: 592064248, busy: 456182] nvmf_tgt_poll_group_007 is idle 00:26:36.856 [load: 0%, idle: 798252570, busy: 489210] iscsi_poll_group_1 is idle 00:26:36.856 [load: 0%, idle: 802461352, busy: 489664] iscsi_poll_group_2 is idle 00:26:36.856 [load: 0%, idle: 798034610, busy: 488460] iscsi_poll_group_3 is idle 00:26:36.856 [load: 0%, idle: 799363314, busy: 543924] iscsi_poll_group_4 is idle 00:26:36.856 [load: 0%, idle: 797118474, busy: 501946] iscsi_poll_group_37 is idle 00:26:36.856 [load: 0%, idle: 799224966, busy: 502462] iscsi_poll_group_38 is idle 00:26:36.856 [load: 0%, idle: 796454468, busy: 502142] iscsi_poll_group_39 is idle 00:26:36.856 [load: 0%, idle: 797540296, busy: 502962] iscsi_poll_group_40 is idle 00:26:36.856 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:26:36.856 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_000 cpumask: 0x1e00000001e 00:26:36.856 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_001 cpumask: 0x1e00000001e 00:26:36.856 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_002 cpumask: 0x1e00000001e 00:26:37.115 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_003 cpumask: 0x1e00000001e 00:26:37.115 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_004 cpumask: 0x1e00000001e 00:26:37.115 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_005 cpumask: 0x1e00000001e 00:26:37.115 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_006 cpumask: 0x1e00000001e 00:26:37.374 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_007 cpumask: 0x1e00000001e 00:26:37.374 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:26:37.374 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:26:37.634 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:26:37.634 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:26:37.634 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:26:37.634 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:26:37.894 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:26:37.894 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:26:42.089 [load: 5%, idle: 860319540, busy: 47827924] app_thread is idle 00:26:42.089 [load: 0%, idle: 596009356, busy: 501440] nvmf_tgt_poll_group_000 is idle 00:26:42.089 [load: 0%, idle: 597449406, busy: 502474] nvmf_tgt_poll_group_001 is idle 00:26:42.089 [load: 0%, idle: 596645876, busy: 521966] nvmf_tgt_poll_group_002 is idle 00:26:42.089 [load: 0%, 
idle: 596709122, busy: 501696] nvmf_tgt_poll_group_003 is idle 00:26:42.089 [load: 0%, idle: 597494572, busy: 501952] nvmf_tgt_poll_group_004 is idle 00:26:42.089 [load: 0%, idle: 596933638, busy: 501848] nvmf_tgt_poll_group_005 is idle 00:26:42.089 [load: 0%, idle: 596308250, busy: 501710] nvmf_tgt_poll_group_006 is idle 00:26:42.089 [load: 0%, idle: 596792044, busy: 519462] nvmf_tgt_poll_group_007 is idle 00:26:42.089 [load: 0%, idle: 804913662, busy: 544818] iscsi_poll_group_1 is idle 00:26:42.089 [load: 0%, idle: 807575096, busy: 545202] iscsi_poll_group_2 is idle 00:26:42.089 [load: 0%, idle: 803085086, busy: 542390] iscsi_poll_group_3 is idle 00:26:42.089 [load: 0%, idle: 805450906, busy: 577956] iscsi_poll_group_4 is idle 00:26:42.089 [load: 0%, idle: 803153314, busy: 562296] iscsi_poll_group_37 is idle 00:26:42.089 [load: 0%, idle: 805820882, busy: 558902] iscsi_poll_group_38 is idle 00:26:42.089 [load: 0%, idle: 803209356, busy: 558412] iscsi_poll_group_39 is idle 00:26:42.089 [load: 0%, idle: 802472296, busy: 559488] iscsi_poll_group_40 is idle 00:26:42.089 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:26:42.089 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_000 cpumask: 0x1e00000001e 00:26:42.089 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_001 cpumask: 0x1e00000001e 00:26:42.089 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_002 cpumask: 0x1e00000001e 00:26:42.348 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_003 cpumask: 0x1e00000001e 00:26:42.348 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_004 cpumask: 0x1e00000001e 00:26:42.348 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_005 cpumask: 0x1e00000001e 00:26:42.348 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_006 cpumask: 0x1e00000001e 00:26:42.607 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_007 cpumask: 0x1e00000001e 00:26:42.607 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:26:42.607 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:26:42.866 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:26:42.866 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:26:42.866 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:26:42.866 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:26:43.126 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:26:43.126 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:26:47.319 [load: 5%, idle: 854013630, busy: 46215224] app_thread is idle 00:26:47.319 [load: 0%, idle: 593275182, busy: 501842] nvmf_tgt_poll_group_000 is idle 00:26:47.319 [load: 0%, idle: 593724716, busy: 540122] nvmf_tgt_poll_group_001 is idle 00:26:47.319 [load: 0%, idle: 593452946, busy: 501746] nvmf_tgt_poll_group_002 is idle 00:26:47.319 [load: 0%, idle: 593159686, busy: 502002] nvmf_tgt_poll_group_003 is idle 00:26:47.319 [load: 0%, idle: 593972868, busy: 502104] nvmf_tgt_poll_group_004 is idle 00:26:47.319 [load: 0%, idle: 593827334, busy: 527082] nvmf_tgt_poll_group_005 is idle 00:26:47.319 [load: 0%, idle: 593378484, busy: 501694] nvmf_tgt_poll_group_006 is idle 00:26:47.319 [load: 0%, idle: 593949966, busy: 501216] nvmf_tgt_poll_group_007 is idle 00:26:47.319 [load: 0%, idle: 800753650, busy: 534562] 
iscsi_poll_group_1 is idle 00:26:47.319 [load: 0%, idle: 802734172, busy: 534368] iscsi_poll_group_2 is idle 00:26:47.319 [load: 0%, idle: 799553162, busy: 557076] iscsi_poll_group_3 is idle 00:26:47.319 [load: 0%, idle: 801736432, busy: 534584] iscsi_poll_group_4 is idle 00:26:47.319 [load: 0%, idle: 799394284, busy: 547884] iscsi_poll_group_37 is idle 00:26:47.319 [load: 0%, idle: 800677908, busy: 548940] iscsi_poll_group_38 is idle 00:26:47.319 [load: 0%, idle: 797446070, busy: 548142] iscsi_poll_group_39 is idle 00:26:47.319 [load: 0%, idle: 797967888, busy: 569152] iscsi_poll_group_40 is idle 00:26:47.319 13:58:18 scheduler.idle -- scheduler/idle.sh@1 -- # killprocess 3976329 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@954 -- # '[' -z 3976329 ']' 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@958 -- # kill -0 3976329 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@959 -- # uname 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3976329 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3976329' 00:26:47.319 killing process with pid 3976329 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@973 -- # kill 3976329 00:26:47.319 13:58:18 scheduler.idle -- common/autotest_common.sh@978 -- # wait 3976329 00:26:47.888 00:26:47.888 real 0m34.552s 00:26:47.888 user 1m15.741s 00:26:47.888 sys 0m3.738s 00:26:47.888 13:58:19 scheduler.idle -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:47.888 13:58:19 scheduler.idle -- common/autotest_common.sh@10 -- # set +x 00:26:47.888 ************************************ 00:26:47.888 END TEST idle 00:26:47.888 ************************************ 00:26:47.888 13:58:19 scheduler -- scheduler/scheduler.sh@17 -- # run_test dpdk_governor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh 00:26:47.888 13:58:19 scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:47.888 13:58:19 scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:47.888 13:58:19 scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:48.463 ************************************ 00:26:48.463 START TEST dpdk_governor 00:26:48.463 ************************************ 00:26:48.463 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh 00:26:48.463 * Looking for test storage... 
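The cpumask values printed by the idle-scheduler test above are plain bitmasks: bit N set means CPU N belongs to that thread's set, so 0x1e00000001e covers CPUs 1-4 and 37-40 while 0x10000000000 is CPU 40 alone. A minimal bash sketch (illustrative only, not taken from the SPDK scripts) that decodes such a mask:

    decode_cpumask() {
        # Bit N set in the hex mask means CPU N is part of the set.
        local mask=$(( 16#${1#0x} ))
        local cpu cpus=()
        for (( cpu = 0; mask != 0; cpu++, mask >>= 1 )); do
            (( mask & 1 )) && cpus+=("$cpu")
        done
        echo "${cpus[*]}"
    }
    decode_cpumask 0x1e00000001e   # -> 1 2 3 4 37 38 39 40 (nvmf_tgt poll groups)
    decode_cpumask 0x10000000000   # -> 40 (iscsi_poll_group_40)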
00:26:48.463 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:26:48.463 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:48.463 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1711 -- # lcov --version 00:26:48.463 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:48.463 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:48.463 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.463 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@344 -- # case "$op" in 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@345 -- # : 1 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@365 -- # decimal 1 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@353 -- # local d=1 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@355 -- # echo 1 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@366 -- # decimal 2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@353 -- # local d=2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@355 -- # echo 2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scripts/common.sh@368 -- # return 0 00:26:48.464 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.464 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:48.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.464 --rc genhtml_branch_coverage=1 00:26:48.464 --rc genhtml_function_coverage=1 00:26:48.464 --rc genhtml_legend=1 00:26:48.464 --rc geninfo_all_blocks=1 00:26:48.464 --rc geninfo_unexecuted_blocks=1 00:26:48.464 00:26:48.464 ' 00:26:48.464 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:48.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.464 --rc genhtml_branch_coverage=1 00:26:48.464 --rc genhtml_function_coverage=1 00:26:48.464 --rc genhtml_legend=1 00:26:48.464 --rc geninfo_all_blocks=1 00:26:48.464 --rc geninfo_unexecuted_blocks=1 00:26:48.464 00:26:48.464 ' 00:26:48.464 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:48.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.464 --rc genhtml_branch_coverage=1 00:26:48.464 --rc genhtml_function_coverage=1 00:26:48.464 --rc genhtml_legend=1 00:26:48.464 --rc geninfo_all_blocks=1 00:26:48.464 --rc geninfo_unexecuted_blocks=1 00:26:48.464 00:26:48.464 ' 00:26:48.464 13:58:19 scheduler.dpdk_governor -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:48.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.464 --rc genhtml_branch_coverage=1 00:26:48.464 --rc genhtml_function_coverage=1 00:26:48.464 --rc genhtml_legend=1 00:26:48.464 --rc geninfo_all_blocks=1 00:26:48.464 --rc geninfo_unexecuted_blocks=1 00:26:48.464 00:26:48.464 ' 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/governor.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:26:48.464 13:58:19 scheduler.dpdk_governor -- 
scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/cgroups.sh@244 -- # check_cgroup 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/cgroups.sh@10 -- # echo 2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/governor.sh@12 -- # trap 'killprocess "$spdk_pid" || :; restore_cpufreq' EXIT 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/governor.sh@157 -- # map_cpufreq 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@250 -- # cpufreq_drivers=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@250 -- # local -g cpufreq_drivers 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@251 -- # cpufreq_governors=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@251 -- # local -g cpufreq_governors 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@252 -- # cpufreq_base_freqs=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@252 -- # local -g cpufreq_base_freqs 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@253 -- # cpufreq_max_freqs=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@253 -- # local -g cpufreq_max_freqs 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@254 -- # cpufreq_min_freqs=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@254 -- # local -g cpufreq_min_freqs 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@255 -- # cpufreq_cur_freqs=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@255 -- # local -g cpufreq_cur_freqs 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@256 -- # cpufreq_is_turbo=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@256 -- # local -g cpufreq_is_turbo 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@257 -- # cpufreq_available_freqs=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@257 -- # local -g cpufreq_available_freqs 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@258 -- # cpufreq_available_governors=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@258 -- # local -g cpufreq_available_governors 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@259 -- # cpufreq_high_prio=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@259 -- # local -g cpufreq_high_prio 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@260 -- # cpufreq_non_turbo_ratio=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@260 -- # local -g cpufreq_non_turbo_ratio 
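The declarations traced above set up the per-CPU arrays that map_cpufreq later fills (driver, governor, base/min/max/current frequency, turbo flag, available frequencies and governors). A rough sketch of where such values come from, assuming the standard cpufreq sysfs layout; the exact reads performed by common.sh may differ:

    cpu=/sys/devices/system/cpu/cpu0
    driver=$(< "$cpu/cpufreq/scaling_driver")        # intel_pstate in this run
    governor=$(< "$cpu/cpufreq/scaling_governor")    # powersave
    base_khz=$(< "$cpu/cpufreq/base_frequency")      # 2300000 (exposed by intel_pstate)
    cur_khz=$(< "$cpu/cpufreq/scaling_cur_freq")     # e.g. 1000015
    min_khz=$(< "$cpu/cpufreq/cpuinfo_min_freq")     # 1000000
    max_khz=$(< "$cpu/cpufreq/cpuinfo_max_freq")     # 3700000
    governors=$(< "$cpu/cpufreq/scaling_available_governors")
    echo "cpu0: $driver/$governor base=$base_khz cur=$cur_khz range=$min_khz-$max_khz [$governors]"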
00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@261 -- # cpufreq_setspeed=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@261 -- # local -g cpufreq_setspeed 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@262 -- # cpuinfo_max_freqs=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@262 -- # local -g cpuinfo_max_freqs 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@263 -- # cpuinfo_min_freqs=() 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@263 -- # local -g cpuinfo_min_freqs 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@264 -- # local -g turbo_enabled=0 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@265 -- # local cpu cpu_idx 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.464 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=0 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq ]] 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq/base_frequency ]] 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000015 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_0 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_0[@]' 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_0 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_0[@]' 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.465 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 0 0xce 00:26:48.728 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.728 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.728 13:58:19 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.728 13:58:20 scheduler.dpdk_governor 
-- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.728 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # 
(( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=1 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq ]] 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq/base_frequency ]] 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999997 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_1 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_1[@]' 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_1 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_1[@]' 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 1 0xce 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.729 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq 
< num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 
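The loop traced above builds the per-CPU frequency table: index 0 holds base_max_freq+1 (2300001) as the turbo marker, followed by 2300000 down to 1000000. A condensed bash sketch of the same construction; the 100 MHz step is inferred from the traced values rather than read out of common.sh:

    base_max_freq=2300000   # kHz, from base_frequency
    num_freqs=14            # non-turbo steps reported for this CPU
    is_turbo=1              # base_max_freq < cpuinfo_max_freq, so one extra turbo slot
    (( is_turbo )) && (( num_freqs += 1 ))
    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$(( base_max_freq + 1 ))    # 2300001 marks the turbo entry
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    echo "${available_freqs[*]}"   # 2300001 2300000 2200000 ... 1100000 1000000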
00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=10 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq ]] 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq/base_frequency ]] 00:26:48.729 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=2300008 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_10 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_10[@]' 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_10 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_10[@]' 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 10 0xce 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # 
(( freq = 0 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=11 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq ]] 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.730 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq/base_frequency ]] 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_11 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_11[@]' 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_11 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_11[@]' 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.730 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 11 0xce 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=12 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq ]] 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq/base_frequency ]] 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # 
cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_12 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_12[@]' 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_12 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_12[@]' 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 12 0xce 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.731 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=13 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq ]] 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq/base_frequency ]] 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1361410 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_13 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_13[@]' 
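
The trace above shows scheduler/common.sh walking each /sys/devices/system/cpu/cpuN/cpufreq node and building a per-CPU frequency table: with the intel_pstate driver and a 2300000 kHz base frequency it stores a turbo sentinel (base + 1, i.e. 2300001) in slot 0 and then steps down in 100000 kHz increments to the 1000000 kHz minimum. The snippet below is a minimal sketch reconstructed from the traced statements (scheduler/common.sh@313-325), not the verbatim SPDK source; the function name build_available_freqs and the 100000 kHz step are assumptions inferred from the values logged in this run.

# Sketch of the frequency-table fill seen at scheduler/common.sh@320-325 above.
# Assumes an intel_pstate CPU whose sysfs node is passed as $1, e.g. /sys/devices/system/cpu/cpu12.
build_available_freqs() {
  local cpu=$1
  local base_max_freq min_freq num_freqs freq is_turbo=1     # turbo available, as in this run
  base_max_freq=$(< "$cpu/cpufreq/base_frequency")            # 2300000 kHz in this trace
  min_freq=$(< "$cpu/cpufreq/cpuinfo_min_freq")                # 1000000 kHz in this trace
  num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 ))     # 14 steps of 100 MHz
  ((is_turbo)) && ((num_freqs += 1))                           # extra slot for the turbo sentinel
  local -a available_freqs=()
  for ((freq = 0; freq < num_freqs; freq++)); do
    if ((freq == 0 && is_turbo)); then
      available_freqs[freq]=$((base_max_freq + 1))             # 2300001 marks "turbo allowed"
    else
      available_freqs[freq]=$((base_max_freq - (freq - is_turbo) * 100000))
    fi
  done
  printf '%s\n' "${available_freqs[@]}"                        # 2300001 2300000 2200000 ... 1000000
}

The real script additionally binds the result to a per-CPU array (available_freqs_cpu_N) through a bash nameref (local -n), which is why the trace records both the nameref assignment at @286 and the quoted array reference stored in cpufreq_available_freqs[cpu_idx] at @287.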
00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_13 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_13[@]' 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 13 0xce 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.732 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.733 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.733 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.733 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.733 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.733 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.733 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.994 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.994 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=14 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq ]] 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq/base_frequency ]] 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_14 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_14[@]' 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_14 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_14[@]' 00:26:48.995 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 14 0xce 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=15 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq ]] 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq/base_frequency ]] 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_15 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_15[@]' 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_15 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_15[@]' 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 15 0xce 00:26:48.995 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 
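
The non_turbo_ratio value 0x70a2cf3811700 read via rdmsr.pl above is the raw content of MSR_PLATFORM_INFO (0xCE); bits 15:8 hold the maximum non-turbo ratio, here 0x17 = 23, which at the usual 100 MHz reference clock yields the 2300000 kHz base_max_freq recorded for every core at scheduler/common.sh@311. A hedged sketch of that decode in shell arithmetic only (rdmsr.pl itself is the SPDK helper invoked in the trace and is not reproduced here; the 100 MHz multiplier is an assumption about this platform's bus clock):

# Decode the max non-turbo ratio from a raw MSR_PLATFORM_INFO (0xCE) value.
# The constant below is the value logged in this run.
non_turbo_ratio=0x70a2cf3811700
ratio=$(( (non_turbo_ratio >> 8) & 0xff ))    # bits 15:8 -> 0x17 = 23
base_max_freq=$(( ratio * 100000 ))           # 2300000 kHz, matching scheduler/common.sh@311
echo "non-turbo ratio=$ratio base_max_freq=${base_max_freq} kHz"

Because cpufreq_base_freqs[cpu_idx] / 100000 equals this ratio (23) rather than exceeding it, the @306 comparison is false and each core is left with cpufreq_high_prio[cpu_idx]=0 in the trace.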
00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.995 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=16 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq ]] 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq/base_frequency ]] 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999811 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_16 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_16[@]' 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_16 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_16[@]' 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 16 0xce 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.996 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=17 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq ]] 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq/base_frequency ]] 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000112 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_17 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_17[@]' 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_17 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_17[@]' 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 17 0xce 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.996 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.996 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # 
for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=18 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq ]] 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq/base_frequency ]] 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=2300007 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_18 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_18[@]' 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_18 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_18[@]' 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 18 0xce 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:48.997 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=19 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq ]] 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:48.997 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq/base_frequency ]] 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_19 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_19[@]' 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:48.997 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_19 00:26:48.998 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_19[@]' 00:26:48.998 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:48.998 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:48.998 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 19 0xce 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.261 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=2 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq ]] 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq/base_frequency ]] 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 
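Each per-CPU block in this trace begins by recording the cpufreq driver, governor and frequency limits before the frequency table is built. A minimal sketch of the equivalent sysfs reads for cpu2, assuming the standard cpufreq attribute names (the trace itself only shows scaling_available_governors being read explicitly):

    cpu=/sys/devices/system/cpu/cpu2
    driver=$(< "$cpu/cpufreq/scaling_driver")              # intel_pstate
    governor=$(< "$cpu/cpufreq/scaling_governor")          # powersave
    base_freq=$(< "$cpu/cpufreq/base_frequency")           # 2300000 kHz
    cur_freq=$(< "$cpu/cpufreq/scaling_cur_freq")
    max_freq=$(< "$cpu/cpufreq/scaling_max_freq")
    min_freq=$(< "$cpu/cpufreq/scaling_min_freq")
    available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))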
00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_2 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_2[@]' 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_2 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_2[@]' 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 2 0xce 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.261 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.262 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=20 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq ]] 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq/base_frequency ]] 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_20 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- 
# cpufreq_available_governors[cpu_idx]='available_governors_cpu_20[@]' 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_20 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_20[@]' 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 20 0xce 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.262 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.263 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=21 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq ]] 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq/base_frequency ]] 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_21 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_21[@]' 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_21 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # 
cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_21[@]' 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 21 0xce 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.263 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.263 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=22 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq ]] 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq/base_frequency ]] 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_22 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_22[@]' 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_22 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_22[@]' 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 22 0xce 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 
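The rdmsr.pl calls in this trace read MSR 0xCE (MSR_PLATFORM_INFO), whose bits 15:8 hold the maximum non-turbo ratio; multiplying the ratio by the 100 MHz bus clock gives the base frequency in kHz. A minimal bash check using the value 0x70a2cf3811700 captured above:

    # bits 15:8 of MSR_PLATFORM_INFO -> maximum non-turbo ratio
    platform_info=0x70a2cf3811700
    non_turbo_ratio=$(( (platform_info >> 8) & 0xff ))              # 23
    echo "ratio=$non_turbo_ratio base=$(( non_turbo_ratio * 100000 )) kHz"   # 2300000 kHz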
00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.264 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=23 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq ]] 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq/base_frequency ]] 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000037 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_23 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_23[@]' 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_23 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_23[@]' 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 23 0xce 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.265 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 
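The rdmsr.pl call above reads MSR 0xce (MSR_PLATFORM_INFO) for the CPU, and the script derives the non-turbo ratio of 23 from the raw value. A minimal standalone sketch of that decode, assuming bits 15:8 carry the maximum non-turbo ratio and reusing the raw value 0x70a2cf3811700 reported in this log (rdmsr.pl itself needs root and the msr module, so plain arithmetic stands in for it here):

#!/usr/bin/env bash
# Hypothetical re-creation of the ratio decode visible in the trace above.
msr_platform_info=0x70a2cf3811700     # raw MSR 0xce value logged for cpu23

# Bits 15:8 of MSR 0xce (MSR_PLATFORM_INFO) hold the maximum non-turbo ratio.
non_turbo_ratio=$(( (msr_platform_info >> 8) & 0xff ))    # -> 23

# One ratio step is a 100 MHz multiple, so the non-turbo ceiling in kHz is
# ratio * 100000 -- matching the base_max_freq=2300000 recorded just below.
printf 'non-turbo ratio: %d -> %d kHz\n' "$non_turbo_ratio" $(( non_turbo_ratio * 100000 ))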
00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.265 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.266 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=24 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq ]] 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq/base_frequency ]] 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000129 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_24 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_24[@]' 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_24 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_24[@]' 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.266 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 24 0xce 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # 
base_max_freq=2300000 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.526 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.527 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=25 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq ]] 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq/base_frequency ]] 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_25 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_25[@]' 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_25 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_25[@]' 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 25 0xce 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 
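Each CPU block then fills its available_freqs array: index 0 gets base_max_freq+1 as a turbo marker whenever base_max_freq is below cpuinfo_max_freq, and the remaining slots step down in 100 MHz increments to the minimum frequency. A minimal sketch reproducing the list traced above, assuming the num_freqs arithmetic implied by the logged values (14 non-turbo steps plus one turbo slot):

#!/usr/bin/env bash
# Sketch of the list built in the @320-@325 loop, using the values from this log.
base_max_freq=2300000        # kHz, from base_frequency
cpuinfo_min_freq=1000000     # kHz
cpuinfo_max_freq=3700000     # kHz
is_turbo=0

num_freqs=$(( (base_max_freq - cpuinfo_min_freq) / 100000 + 1 ))   # 14 steps
if (( base_max_freq < cpuinfo_max_freq )); then
    (( num_freqs += 1 ))     # extra slot for the turbo marker
    is_turbo=1
fi

available_freqs=()
for (( freq = 0; freq < num_freqs; freq++ )); do
    if (( freq == 0 && is_turbo == 1 )); then
        # base_max_freq + 1 kHz is the "turbo enabled" setpoint seen as 2300001
        available_freqs[freq]=$(( base_max_freq + 1 ))
    else
        available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
    fi
done
printf '%s\n' "${available_freqs[@]}"   # 2300001, 2300000, 2200000, ..., 1000000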
00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.527 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.527 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=26 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu26/cpufreq ]] 00:26:49.528 13:58:20 scheduler.dpdk_governor -- 
scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu26/cpufreq/base_frequency ]] 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999868 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_26 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_26[@]' 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_26 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_26[@]' 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 26 0xce 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 
13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.528 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=27 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu27/cpufreq ]] 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu27/cpufreq/base_frequency ]] 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 
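Before the frequency math, each iteration of the @267 loop samples the standard cpufreq attributes for the CPU from sysfs (driver, governor, base/cur/min/max frequency, available governors). A rough stand-in for that probing for a single CPU, assuming the usual kernel file names, since the exact reads are not echoed in this trace:

#!/usr/bin/env bash
# Hypothetical per-CPU sysfs probe mirroring the values recorded at @268-@284.
cpu=/sys/devices/system/cpu/cpu27
if [[ -e $cpu/cpufreq ]]; then
    driver=$(< "$cpu/cpufreq/scaling_driver")        # intel_pstate in this log
    governor=$(< "$cpu/cpufreq/scaling_governor")    # powersave in this log
    cur_freq=$(< "$cpu/cpufreq/scaling_cur_freq")
    max_freq=$(< "$cpu/cpufreq/scaling_max_freq")
    min_freq=$(< "$cpu/cpufreq/scaling_min_freq")
    base_freq=
    [[ -e $cpu/cpufreq/base_frequency ]] && base_freq=$(< "$cpu/cpufreq/base_frequency")
    read -ra governors < "$cpu/cpufreq/scaling_available_governors"
    printf '%s %s cur=%s min=%s max=%s base=%s governors=%s\n' \
        "$driver" "$governor" "$cur_freq" "$min_freq" "$max_freq" \
        "${base_freq:-n/a}" "${governors[*]}"
fi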
00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.528 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_27 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_27[@]' 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_27 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_27[@]' 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 27 0xce 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.529 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=28 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu28/cpufreq ]] 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu28/cpufreq/base_frequency ]] 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n 
available_governors=available_governors_cpu_28 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_28[@]' 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_28 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_28[@]' 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 28 0xce 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.529 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.530 13:58:20 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=29 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu29/cpufreq ]] 00:26:49.530 13:58:20 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu29/cpufreq/base_frequency ]] 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000028 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_29 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_29[@]' 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n 
available_freqs=available_freqs_cpu_29 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_29[@]' 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 29 0xce 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.530 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.531 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.531 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=3 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu3/cpufreq ]] 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu3/cpufreq/base_frequency ]] 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999800 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_3 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_3[@]' 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_3 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_3[@]' 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.791 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 
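The xtrace above and below repeats the same probe for every core (cpu29, cpu3, cpu30, cpu31, ...): scheduler/common.sh reads the cpufreq sysfs attributes, pulls the non-turbo ratio out of MSR 0xCE via rdmsr.pl, and then fills available_freqs_cpu_N with a turbo marker (base frequency + 1, e.g. 2300001) followed by 100 MHz steps from base_max_freq down to cpuinfo_min_freq. The following is a minimal sketch of that intel_pstate branch, reconstructed from the trace: the variable and array names follow the trace, but the exact sysfs reads and the MSR bit arithmetic are inferred assumptions, not code copied from common.sh (running it also needs root and the msr kernel module for rdmsr.pl).

    #!/usr/bin/env bash
    # Sketch (assumed reconstruction) of the per-CPU frequency map that the
    # xtrace shows scheduler/common.sh building for the intel_pstate driver.
    shopt -s extglob

    sysfs_cpu=/sys/devices/system/cpu
    rdmsr=./spdk/test/scheduler/rdmsr.pl   # path as printed in the trace; adjust to your checkout

    map_cpufreq_sketch() {
        local cpu cpu_idx freq num_freqs base_max_freq non_turbo_ratio
        for cpu in "$sysfs_cpu/cpu"+([0-9]); do
            cpu_idx=${cpu##*cpu}
            [[ -e $cpu/cpufreq ]] || continue

            cpufreq_drivers[cpu_idx]=$(< "$cpu/cpufreq/scaling_driver")       # e.g. intel_pstate
            cpufreq_governors[cpu_idx]=$(< "$cpu/cpufreq/scaling_governor")   # e.g. powersave
            [[ -e $cpu/cpufreq/base_frequency ]] && cpufreq_base_freqs[cpu_idx]=$(< "$cpu/cpufreq/base_frequency")
            cpuinfo_min_freqs[cpu_idx]=$(< "$cpu/cpufreq/cpuinfo_min_freq")
            cpuinfo_max_freqs[cpu_idx]=$(< "$cpu/cpufreq/cpuinfo_max_freq")

            # One frequency list per CPU, addressed through a nameref as in the trace.
            local -n available_freqs=available_freqs_cpu_$cpu_idx

            # MSR 0xCE (PLATFORM_INFO): bits 15:8 hold the maximum non-turbo ratio.
            # 0x70a2cf3811700 -> (x >> 8) & 0xff == 23, i.e. 23 x 100 MHz = 2.3 GHz base.
            non_turbo_ratio=$("$rdmsr" "$cpu_idx" 0xce)
            cpufreq_non_turbo_ratio[cpu_idx]=$(((non_turbo_ratio >> 8) & 0xff))

            base_max_freq=${cpufreq_base_freqs[cpu_idx]:-${cpuinfo_max_freqs[cpu_idx]}}
            num_freqs=$(((base_max_freq - cpuinfo_min_freqs[cpu_idx]) / 100000 + 1))
            cpufreq_is_turbo[cpu_idx]=0
            if ((base_max_freq < cpuinfo_max_freqs[cpu_idx])); then
                ((num_freqs += 1))        # extra slot for the turbo marker
                cpufreq_is_turbo[cpu_idx]=1
            fi

            # Index 0 is the turbo marker (base + 1, e.g. 2300001); the rest
            # step down from base_max_freq in 100 MHz increments.
            available_freqs=()
            for ((freq = 0; freq < num_freqs; freq++)); do
                if ((freq == 0 && cpufreq_is_turbo[cpu_idx] == 1)); then
                    available_freqs[freq]=$((base_max_freq + 1))
                else
                    available_freqs[freq]=$((base_max_freq - (freq - cpufreq_is_turbo[cpu_idx]) * 100000))
                fi
            done
            echo "cpu$cpu_idx (${cpufreq_drivers[cpu_idx]}): ${available_freqs[*]}"
        done
    }

    map_cpufreq_sketch

With the values visible in the trace (base_frequency 2300000, cpuinfo_min_freq 1000000, cpuinfo_max_freq 3700000, non-turbo ratio 23) this yields num_freqs=15 and exactly the list the loop prints for each CPU: 2300001, 2300000, 2200000, ..., 1000000.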
00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 3 0xce 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.792 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=30 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu30/cpufreq ]] 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu30/cpufreq/base_frequency ]] 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_30 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_30[@]' 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_30 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_30[@]' 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 30 0xce 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.792 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=31 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu31/cpufreq ]] 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu31/cpufreq/base_frequency ]] 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000014 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_31 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_31[@]' 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_31 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_31[@]' 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 31 0xce 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.792 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # 
cpufreq_high_prio[cpu_idx]=0 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 
13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.793 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=32 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu32/cpufreq ]] 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu32/cpufreq/base_frequency ]] 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_32 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_32[@]' 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_32 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_32[@]' 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 32 0xce 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 
)) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=33 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # 
[[ -e /sys/devices/system/cpu/cpu33/cpufreq ]] 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu33/cpufreq/base_frequency ]] 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999957 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_33 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_33[@]' 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_33 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_33[@]' 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 33 0xce 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.793 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=34 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq ]] 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq/base_frequency ]] 00:26:49.794 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999652 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_34 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_34[@]' 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_34 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_34[@]' 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 34 0xce 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:49.794 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.054 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=35 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq ]] 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq/base_frequency ]] 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000224 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 
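The xtrace above keeps repeating the same per-CPU pattern: for an intel_pstate CPU the helper seeds slot 0 of available_freqs with the turbo frequency (scaling max + 1 kHz, here 2300001) and then steps down from the base frequency to the minimum in 100 MHz increments. A minimal sketch of that loop, reconstructed from the traced commands rather than copied from scheduler/common.sh itself (the function name and the num_freqs derivation below are assumptions):

    # Hedged reconstruction of the frequency enumeration shown in the trace:
    # slot 0 holds the turbo frequency (base max + 1 kHz), the remaining
    # slots step down from the base frequency to the minimum in 100 MHz steps.
    build_available_freqs() {
            local base_max_freq=$1 min_freq=$2 is_turbo=$3
            # one plausible way to arrive at the num_freqs=14 seen above
            local num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 ))
            (( is_turbo )) && (( num_freqs += 1 ))
            local -a available_freqs=()
            local freq
            for (( freq = 0; freq < num_freqs; freq++ )); do
                    if (( freq == 0 && is_turbo == 1 )); then
                            available_freqs[freq]=$(( base_max_freq + 1 ))   # e.g. 2300001
                    else
                            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
                    fi
            done
            printf '%s\n' "${available_freqs[@]}"
    }

Called as build_available_freqs 2300000 1000000 1 it reproduces the fifteen values logged for each CPU in this section, 2300001 down to 1000000.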
00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_35 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_35[@]' 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_35 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_35[@]' 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 35 0xce 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=36 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq ]] 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq/base_frequency ]] 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_36 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_36[@]' 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 
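Each CPU block also reads MSR 0xCE through rdmsr.pl and derives a non-turbo ratio of 23 from the raw value 0x70a2cf3811700. Assuming the usual MSR_PLATFORM_INFO layout, where bits 15:8 carry the maximum non-turbo ratio, the logged numbers are consistent; a quick check in plain bash (illustrative only, not part of the test):

    # Assuming bits 15:8 of MSR 0xCE hold the maximum non-turbo ratio,
    # the raw value logged above decodes to ratio 23, i.e. a 2300000 kHz
    # base frequency at 100 MHz per ratio step.
    non_turbo_ratio=$(( (0x70a2cf3811700 >> 8) & 0xff ))
    echo "$non_turbo_ratio"                      # 23
    echo "$(( non_turbo_ratio * 100000 )) kHz"   # 2300000 kHz base frequency
    # 2300000 / 100000 > 23 is false, which is why cpufreq_high_prio[cpu_idx]
    # stays 0 for every CPU in the trace above
    echo $(( 2300000 / 100000 > non_turbo_ratio ))   # 0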
00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_36 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_36[@]' 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 36 0xce 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.055 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=37 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq ]] 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq/base_frequency ]] 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_37 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_37[@]' 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_37 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_37[@]' 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.056 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 37 0xce 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=38 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq ]] 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq/base_frequency ]] 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999858 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_38 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_38[@]' 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_38 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_38[@]' 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 38 0xce 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # 
cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- 
# (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.056 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # 
available_freqs[freq]=1100000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=39 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq ]] 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq/base_frequency ]] 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_39 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_39[@]' 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_39 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_39[@]' 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 39 0xce 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.057 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq 
== 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # 
available_freqs[freq]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=4 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq ]] 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq/base_frequency ]] 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_4 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_4[@]' 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_4 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_4[@]' 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.057 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 4 0xce 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.318 13:58:21 scheduler.dpdk_governor -- 
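For reference, the arithmetic traced just above for cpu4 can be reproduced with a short standalone sketch. This is a reconstruction from the trace, not the scheduler/common.sh source, and decode_platform_info is a hypothetical helper name: bits 15:8 of MSR 0xCE (MSR_PLATFORM_INFO, read here via rdmsr.pl) hold the maximum non-turbo ratio, so 0x70a2cf3811700 yields ratio 23, a base_max_freq of 2300000 kHz, and 14 frequency setpoints down to the 1000000 kHz minimum.

# decode_platform_info: hypothetical helper; mirrors the values seen in the trace
decode_platform_info() {
  local msr_val=$1 min_freq=$2                                   # e.g. 0x70a2cf3811700 and 1000000 (kHz)
  local ratio=$(( (msr_val >> 8) & 0xff ))                       # bits 15:8 -> max non-turbo ratio (23)
  local base_max_freq=$(( ratio * 100000 ))                      # 23 * 100 MHz = 2300000 kHz
  local num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 )) # 14 setpoints, 100 MHz apart
  echo "$ratio $base_max_freq $num_freqs"
}
decode_platform_info 0x70a2cf3811700 1000000                     # -> 23 2300000 14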
scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.318 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=40 00:26:50.319 13:58:21 
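The loop traced above for cpu4 (and repeated below for cpu40, cpu41, and so on) fills one available_freqs_cpu_N array per core: because turbo is enabled, slot 0 gets base_max_freq+1 (2300001) as the turbo request value, and the remaining slots step down from 2300000 kHz to 1000000 kHz in 100000 kHz increments. A minimal sketch of that construction, reconstructed from the trace rather than copied from scheduler/common.sh (num_freqs here already includes the extra turbo slot):

build_available_freqs() {
  local base_max_freq=$1 num_freqs=$2 is_turbo=$3
  local -a freqs=()
  local freq
  for (( freq = 0; freq < num_freqs; freq++ )); do
    if (( freq == 0 && is_turbo == 1 )); then
      freqs[freq]=$(( base_max_freq + 1 ))                            # 2300001: turbo request
    else
      freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))   # 2300000, 2200000, ...
    fi
  done
  echo "${freqs[@]}"
}
build_available_freqs 2300000 15 1   # 2300001 2300000 2200000 ... 1000000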
scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq ]] 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq/base_frequency ]] 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_40 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_40[@]' 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_40 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_40[@]' 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 40 0xce 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < 
num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.319 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=41 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu41/cpufreq ]] 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e 
/sys/devices/system/cpu/cpu41/cpufreq/base_frequency ]] 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_41 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_41[@]' 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_41 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_41[@]' 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 41 0xce 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq 
< num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.320 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=42 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq ]] 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq/base_frequency ]] 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.321 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_42 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_42[@]' 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_42 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_42[@]' 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 42 0xce 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- 
# (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=43 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq ]] 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq/base_frequency ]] 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_43 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_43[@]' 00:26:50.321 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- 
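The local -n / 'name[@]' pattern traced here is how the script keeps a separate governor list and frequency list per CPU while still addressing them generically by index: a nameref aliases a dynamically named array (available_governors_cpu_43), and the indirection string stored in cpufreq_available_governors[cpu_idx] can be expanded later with ${!ref}. A small illustration of the same pattern, simplified and using example governor values rather than the original code:

cpu_idx=43
declare -a cpufreq_available_governors
declare -n govs="available_governors_cpu_${cpu_idx}"    # nameref to a per-CPU array
govs=(performance powersave)                            # example values for intel_pstate
cpufreq_available_governors[cpu_idx]="available_governors_cpu_${cpu_idx}[@]"
ref=${cpufreq_available_governors[cpu_idx]}
echo "cpu${cpu_idx} governors: ${!ref}"                 # -> cpu43 governors: performance powersave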
# available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_43 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_43[@]' 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 43 0xce 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < 
num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=44 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq ]] 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq/base_frequency ]] 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_44 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_44[@]' 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_44 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_44[@]' 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case 
"${cpufreq_drivers[cpu_idx]}" in 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 44 0xce 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.322 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.323 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < 
num_freqs )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.646 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=45 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq ]] 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq/base_frequency ]] 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000185 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_45 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_45[@]' 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_45 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_45[@]' 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 45 0xce 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # 
non_turbo_ratio=0x70a2cf3811700 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.647 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # 
(( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=46 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq ]] 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq/base_frequency ]] 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_46 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_46[@]' 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_46 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_46[@]' 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 46 0xce 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.648 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.648 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # 
(( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=47 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu47/cpufreq ]] 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu47/cpufreq/base_frequency ]] 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000017 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_47 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_47[@]' 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_47 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_47[@]' 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 47 0xce 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.649 13:58:21 
scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.649 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # 
for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=48 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq ]] 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq/base_frequency ]] 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_48 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_48[@]' 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_48 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_48[@]' 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.650 13:58:21 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 48 0xce 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.650 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.650 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=49 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq ]] 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.651 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq/base_frequency ]] 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_49 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_49[@]' 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_49 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_49[@]' 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 49 0xce 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.651 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.651 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.651 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=5 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq ]] 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq/base_frequency ]] 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 
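
The xtrace above is scheduler/common.sh enumerating the scaling frequencies of every online CPU under the intel_pstate driver. For each cpu_idx it reads the cpufreq sysfs attributes (driver, governor, base_frequency, cpuinfo_min_freq/cpuinfo_max_freq), queries MSR 0xCE (MSR_PLATFORM_INFO on Intel parts) through the test's rdmsr.pl helper to get the maximum non-turbo ratio (0x70a2cf3811700, bits 15:8 -> 23 -> 2300000 kHz), and then fills available_freqs_cpu_N from that base frequency down to the minimum in 100 MHz steps, prepending a base_max_freq+1 entry (2300001) when turbo is available. The following is a minimal sketch of that derivation, not the verbatim scheduler/common.sh source: the map_cpu_freqs function name is hypothetical and the num_freqs arithmetic is reconstructed from the values the trace prints; the rdmsr.pl invocation and sysfs paths are taken directly from the log.

#!/usr/bin/env bash
# Sketch only: approximates the per-CPU frequency enumeration recorded above.
sysfs_cpu=/sys/devices/system/cpu
rdmsr=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl

map_cpu_freqs() {   # hypothetical helper name
	local cpu_idx=$1 cpu="$sysfs_cpu/cpu$1"
	local non_turbo_ratio base_max_freq num_freqs freq is_turbo=0
	local -a available_freqs=()

	# MSR 0xCE is MSR_PLATFORM_INFO; bits 15:8 hold the maximum non-turbo ratio.
	non_turbo_ratio=$("$rdmsr" "$cpu_idx" 0xce)        # e.g. 0x70a2cf3811700
	non_turbo_ratio=$((non_turbo_ratio >> 8 & 0xff))   # 0x17 -> 23 -> 2.3 GHz

	local min_freq max_freq
	min_freq=$(< "$cpu/cpufreq/cpuinfo_min_freq")      # 1000000 kHz on this host
	max_freq=$(< "$cpu/cpufreq/cpuinfo_max_freq")      # 3700000 kHz on this host

	base_max_freq=$((non_turbo_ratio * 100000))              # 2300000 kHz
	num_freqs=$(((base_max_freq - min_freq) / 100000 + 1))   # 14 steps of 100 MHz

	# Turbo is available when the hardware maximum exceeds the non-turbo base;
	# reserve one extra slot for a "turbo" pseudo-frequency, as the trace does.
	if ((base_max_freq < max_freq)); then
		is_turbo=1
		((num_freqs += 1))
	fi

	for ((freq = 0; freq < num_freqs; freq++)); do
		if ((freq == 0 && is_turbo == 1)); then
			available_freqs[freq]=$((base_max_freq + 1))   # 2300001 marks turbo
		else
			available_freqs[freq]=$((base_max_freq - (freq - is_turbo) * 100000))
		fi
	done
	printf '%s\n' "${available_freqs[@]}"
}

map_cpu_freqs 45   # would print 2300001 2300000 2200000 ... 1000000 on this host

On this particular host every CPU resolves to the same 15-entry list (the 2300001 turbo marker followed by 2300000 down to 1000000 in 100 MHz steps), which is why the per-CPU blocks in the trace repeat almost verbatim with only cpu_idx and the sampled scaling_cur_freq changing.
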
00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_5 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_5[@]' 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_5 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_5[@]' 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 5 0xce 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.652 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.652 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=50 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq ]] 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq/base_frequency ]] 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_50 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- 
# cpufreq_available_governors[cpu_idx]='available_governors_cpu_50[@]' 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_50 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_50[@]' 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 50 0xce 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.653 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.914 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.915 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=51 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq ]] 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq/base_frequency ]] 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_51 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_51[@]' 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_51 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # 
cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_51[@]' 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 51 0xce 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.915 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.915 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=52 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq ]] 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq/base_frequency ]] 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_52 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_52[@]' 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_52 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_52[@]' 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 52 0xce 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 
00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.916 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.916 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=53 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq ]] 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq/base_frequency ]] 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_53 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_53[@]' 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_53 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_53[@]' 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 53 0xce 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.917 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 
00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=54 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq ]] 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq/base_frequency ]] 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_54 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_54[@]' 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_54 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_54[@]' 00:26:50.917 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 54 0xce 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # 
base_max_freq=2300000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.918 
13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=55 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq ]] 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq/base_frequency ]] 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_55 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_55[@]' 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_55 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_55[@]' 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 55 0xce 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 
00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.918 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:50.919 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=56 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq ]] 00:26:50.919 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq/base_frequency ]] 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_56 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_56[@]' 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_56 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_56[@]' 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:50.919 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 56 0xce 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.180 
13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.180 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.181 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=57 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq ]] 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq/base_frequency ]] 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 
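[editor note] Around this point the trace (scheduler/common.sh@269-@287) reads the per-CPU cpufreq attributes from sysfs: driver, governor, base frequency, current/min/max scaling frequencies, and the available governors. A small self-contained sketch of that discovery step is below; describe_cpufreq is a hypothetical helper name, not an SPDK function, and the sysfs paths are the standard cpufreq attributes seen in the trace.

#!/usr/bin/env bash
# Sketch only: print the cpufreq attributes the trace collects for one CPU.
# $1 is a CPU index, e.g. 57.

describe_cpufreq() {
    local cpu_idx=$1
    local cpu=/sys/devices/system/cpu/cpu$cpu_idx

    [[ -e $cpu/cpufreq ]] || return 0   # skip CPUs without cpufreq support

    local driver governor base cur max min governors
    driver=$(< "$cpu/cpufreq/scaling_driver")       # e.g. intel_pstate
    governor=$(< "$cpu/cpufreq/scaling_governor")   # e.g. powersave
    # base_frequency is only present for intel_pstate in active mode.
    [[ -e $cpu/cpufreq/base_frequency ]] && base=$(< "$cpu/cpufreq/base_frequency")
    cur=$(< "$cpu/cpufreq/scaling_cur_freq")
    max=$(< "$cpu/cpufreq/scaling_max_freq")
    min=$(< "$cpu/cpufreq/scaling_min_freq")
    governors=($(< "$cpu/cpufreq/scaling_available_governors"))

    printf 'cpu%s: driver=%s governor=%s base=%s cur=%s max=%s min=%s governors=%s\n' \
        "$cpu_idx" "$driver" "$governor" "${base:-n/a}" "$cur" "$max" "$min" "${governors[*]}"
}

describe_cpufreq "${1:-0}"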
00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_57 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_57[@]' 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_57 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_57[@]' 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 57 0xce 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.181 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.181 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=58 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq ]] 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq/base_frequency ]] 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n 
available_governors=available_governors_cpu_58 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_58[@]' 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_58 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_58[@]' 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 58 0xce 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.182 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.182 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=59 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq ]] 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq/base_frequency ]] 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000927 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_59 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_59[@]' 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n 
available_freqs=available_freqs_cpu_59 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_59[@]' 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 59 0xce 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.183 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.183 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=6 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq ]] 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq/base_frequency ]] 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_6 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_6[@]' 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_6 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_6[@]' 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 
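[editor note] The next entries run rdmsr.pl <cpu> 0xce and derive cpufreq_non_turbo_ratio=23 from the returned value 0x70a2cf3811700. MSR 0xCE is MSR_PLATFORM_INFO; bits 15:8 hold the maximum non-turbo ratio in units of the 100 MHz bus clock (0x17 = 23 -> 2300 MHz). A sketch of that derivation is below, using the rdmsr tool from msr-tools in place of rdmsr.pl (requires root and the msr kernel module); the traced script actually takes base_max_freq from the base_frequency sysfs file, so the ratio-derived value here is for illustration.

#!/usr/bin/env bash
# Sketch only: read MSR_PLATFORM_INFO and check whether turbo headroom exists.

cpu=${1:-0}
platform_info=$(rdmsr -p "$cpu" 0xce)            # hex string, e.g. 70a2cf3811700
non_turbo_ratio=$(( (0x$platform_info >> 8) & 0xff ))

# Ratio is in 100 MHz units, so the non-turbo base frequency in kHz is
# ratio * 100000 (23 -> 2300000 kHz in this log).
base_max_freq=$(( non_turbo_ratio * 100000 ))

cpuinfo_max=$(< "/sys/devices/system/cpu/cpu$cpu/cpufreq/cpuinfo_max_freq")
is_turbo=0
(( base_max_freq < cpuinfo_max )) && is_turbo=1   # 2300000 < 3700000 -> turbo available

echo "cpu$cpu: non_turbo_ratio=$non_turbo_ratio base_max_freq=$base_max_freq is_turbo=$is_turbo"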
00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 6 0xce 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.184 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=60 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq ]] 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq/base_frequency ]] 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000679 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_60 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_60[@]' 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_60 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_60[@]' 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 60 0xce 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.184 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.185 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=61 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq ]] 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.185 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq/base_frequency ]] 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_61 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_61[@]' 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_61 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_61[@]' 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 61 0xce 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # 
cpufreq_high_prio[cpu_idx]=0 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.446 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 
13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.447 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=62 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq ]] 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq/base_frequency ]] 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_62 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_62[@]' 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_62 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_62[@]' 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 62 0xce 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 
)) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.447 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=63 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # 
[[ -e /sys/devices/system/cpu/cpu63/cpufreq ]] 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu63/cpufreq/base_frequency ]] 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000118 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_63 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_63[@]' 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_63 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_63[@]' 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 63 0xce 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.448 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=64 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq ]] 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq/base_frequency ]] 00:26:51.449 13:58:22 
scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1001237 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_64 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_64[@]' 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_64 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_64[@]' 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 64 0xce 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=65 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq ]] 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq/base_frequency ]] 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 
00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_65 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_65[@]' 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_65 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_65[@]' 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 65 0xce 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.449 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=66 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq ]] 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq/base_frequency ]] 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=999841 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_66 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_66[@]' 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 
00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_66 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_66[@]' 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 66 0xce 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.450 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=67 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq ]] 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq/base_frequency ]] 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_67 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_67[@]' 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_67 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_67[@]' 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.712 13:58:22 scheduler.dpdk_governor -- 
scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.712 13:58:22 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 67 0xce 00:26:51.712 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.712 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- 
scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- 
scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=68 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq ]] 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq/base_frequency ]] 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_68 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_68[@]' 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_68 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_68[@]' 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 68 0xce 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # 
cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.713 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- 
# (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # 
available_freqs[freq]=1100000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=69 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq ]] 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq/base_frequency ]] 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_69 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_69[@]' 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_69 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_69[@]' 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 69 0xce 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.714 13:58:23 
scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq 
== 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.714 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # 
available_freqs[freq]=1000000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=7 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq ]] 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq/base_frequency ]] 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_7 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_7[@]' 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_7 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_7[@]' 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 7 0xce 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- 
scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.715 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=70 00:26:51.716 13:58:23 
scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq ]] 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq/base_frequency ]] 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_70 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_70[@]' 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_70 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_70[@]' 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 70 0xce 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < 
num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=71 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu71/cpufreq ]] 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e 
/sys/devices/system/cpu/cpu71/cpufreq/base_frequency ]] 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.716 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_71 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_71[@]' 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_71 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_71[@]' 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.717 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 71 0xce 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.976 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq 
< num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=8 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq ]] 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq/base_frequency ]] 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.977 13:58:23 
scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_8 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_8[@]' 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_8 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_8[@]' 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 8 0xce 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.977 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( 
freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@267 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@268 -- # cpu_idx=9 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@269 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq ]] 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@270 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@271 -- # cpufreq_governors[cpu_idx]=powersave 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@274 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq/base_frequency ]] 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@275 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@278 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@279 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@280 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@282 -- # local -n available_governors=available_governors_cpu_9 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@283 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_9[@]' 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@284 -- # 
available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@286 -- # local -n available_freqs=available_freqs_cpu_9 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@287 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_9[@]' 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@289 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@300 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 9 0xce 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@302 -- # non_turbo_ratio=0x70a2cf3811700 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@303 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@304 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@305 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@306 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@310 -- # cpufreq_high_prio[cpu_idx]=0 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@311 -- # base_max_freq=2300000 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@313 -- # num_freqs=14 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@314 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@315 -- # (( num_freqs += 1 )) 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@316 -- # cpufreq_is_turbo[cpu_idx]=1 00:26:51.978 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@320 -- # available_freqs=() 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq = 0 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@323 -- # available_freqs[freq]=2300001 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2300000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2200000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 
00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2100000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=2000000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1900000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1800000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1700000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1600000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1500000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1400000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 
00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1300000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1200000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1100000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@322 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@325 -- # available_freqs[freq]=1000000 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq++ )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@321 -- # (( freq < num_freqs )) 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@366 -- # [[ -e /sys/devices/system/cpu/cpufreq/boost ]] 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@368 -- # [[ -e /sys/devices/system/cpu/intel_pstate/no_turbo ]] 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/common.sh@369 -- # turbo_enabled=1 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/governor.sh@159 -- # initial_main_core_governor=powersave 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/governor.sh@161 -- # verify_dpdk_governor 00:26:51.979 13:58:23 scheduler.dpdk_governor -- scheduler/governor.sh@60 -- # xtrace_disable 00:26:51.979 13:58:23 scheduler.dpdk_governor -- common/autotest_common.sh@10 -- # set +x 00:26:52.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.238 [2024-12-05 13:58:23.688273] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
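(Note: the per-CPU walk above, shown here for cpu71, cpu8 and cpu9 and repeated earlier in the log for the other cores, is scheduler/common.sh building its frequency map for the intel_pstate driver: it reads base_frequency from sysfs, calls rdmsr.pl on MSR 0xCE to obtain the maximum non-turbo ratio (0x17 = 23, i.e. 2300 MHz), counts the 100 MHz steps between the 1.0 GHz floor and the 2.3 GHz base, and, since turbo headroom exists (cpuinfo max 3.7 GHz), prepends a 2300001 KHz sentinel that later selects the turbo range. The following is a condensed, hedged sketch of that logic reconstructed from the trace, not the verbatim common.sh code; it reads cpuinfo_min_freq/cpuinfo_max_freq straight from sysfs instead of going through the rdmsr.pl helper.

  build_available_freqs() {
      # Sketch of the intel_pstate branch traced above; the 100000 KHz step and
      # the base_max_freq+1 turbo sentinel are taken from the log output.
      local cpu=$1
      local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
      local base_max_freq min_freq max_freq num_freqs freq is_turbo=0
      local -a available_freqs=()

      base_max_freq=$(< "$cpufreq/base_frequency")   # 2300000 in this run
      min_freq=$(< "$cpufreq/cpuinfo_min_freq")      # 1000000
      max_freq=$(< "$cpufreq/cpuinfo_max_freq")      # 3700000 (turbo max)

      num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 ))   # 14 steps here
      if (( base_max_freq < max_freq )); then
          (( num_freqs += 1 ))                        # extra slot for turbo
          is_turbo=1
      fi

      for (( freq = 0; freq < num_freqs; freq++ )); do
          if (( freq == 0 && is_turbo == 1 )); then
              available_freqs[freq]=$(( base_max_freq + 1 ))                   # 2300001
          else
              available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
          fi
      done
      printf '%s\n' "${available_freqs[@]}"           # 2300001 2300000 ... 1000000
  }
)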
00:26:52.238 [2024-12-05 13:58:23.688420] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3982341 ] 00:26:52.498 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.498 [2024-12-05 13:58:23.929282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 8 00:26:52.757 [2024-12-05 13:58:24.056721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.757 [2024-12-05 13:58:24.056819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.757 [2024-12-05 13:58:24.056968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 37 00:26:52.757 [2024-12-05 13:58:24.056924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:52.757 [2024-12-05 13:58:24.057014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 38 00:26:52.757 [2024-12-05 13:58:24.057072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 39 00:26:52.757 [2024-12-05 13:58:24.057126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 40 00:26:52.757 [2024-12-05 13:58:24.057135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.662 [2024-12-05 13:58:25.864856] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:54.662 [2024-12-05 13:58:25.864917] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:54.662 [2024-12-05 13:58:25.864935] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:55.229 [2024-12-05 13:58:26.676719] 'OCF_Core' volume operations registered 00:26:55.229 [2024-12-05 13:58:26.676764] 'OCF_Cache' volume operations registered 00:26:55.229 [2024-12-05 13:58:26.681782] 'OCF Composite' volume operations registered 00:26:55.229 [2024-12-05 13:58:26.686863] 'SPDK_block_device' volume operations registered 00:26:57.229 Waiting for samples... 
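(Note: at this point spdk_tgt is running on cores 1-4 and 37-40 with core 1 as the main lcore, and the dynamic scheduler has been configured with a load limit of 20, a core limit of 80 and a core-busy threshold of 95. The samples that follow show the main core's frequency being stepped down from 2.1 GHz toward the 1.0 GHz floor, summarized afterwards as "Main cpu1 frequency dropped by 90%". Purely as an illustration of where those numbers come from, the same values can be read back from the standard cpufreq sysfs files while the test runs; the cpu number and polling interval below are arbitrary and not part of the test itself.

  cpu=1
  for _ in {1..8}; do
      cur=$(< /sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_cur_freq)
      cap=$(< /sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_max_freq)
      echo "cpu$cpu: current ${cur} KHz, cap ${cap} KHz"
      sleep 2
  done
)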
00:26:59.156 MAIN DPDK cpu1 current frequency at 2100000 KHz (1000000-2300001 KHz), set frequency 2000000 KHz < 2100000 KHz 00:27:00.533 MAIN DPDK cpu1 current frequency at 2000000 KHz (1000000-2300001 KHz), set frequency 1800000 KHz < 2000000 KHz 00:27:02.439 MAIN DPDK cpu1 current frequency at 1799997 KHz (1000000-2300001 KHz), set frequency 1600000 KHz < 1800000 KHz 00:27:04.342 MAIN DPDK cpu1 current frequency at 1600000 KHz (1000000-2300001 KHz), set frequency 1500000 KHz < 1600000 KHz 00:27:05.718 MAIN DPDK cpu1 current frequency at 1499997 KHz (1000000-2300001 KHz), set frequency 1300000 KHz < 1500000 KHz 00:27:07.620 MAIN DPDK cpu1 current frequency at 1300000 KHz (1000000-2300001 KHz), set frequency 1100000 KHz < 1300000 KHz 00:27:08.992 MAIN DPDK cpu1 current frequency at 1099999 KHz (1000000-2300001 KHz), set frequency 1000000 KHz < 1100000 KHz 00:27:08.992 Main cpu1 frequency dropped by 90% 00:27:08.992 13:58:40 scheduler.dpdk_governor -- scheduler/governor.sh@1 -- # killprocess 3982341 00:27:08.992 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@954 -- # '[' -z 3982341 ']' 00:27:08.992 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@958 -- # kill -0 3982341 00:27:08.992 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@959 -- # uname 00:27:08.992 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.992 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3982341 00:27:09.249 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:09.250 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:09.250 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3982341' 00:27:09.250 killing process with pid 3982341 00:27:09.250 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@973 -- # kill 3982341 00:27:09.250 13:58:40 scheduler.dpdk_governor -- common/autotest_common.sh@978 -- # wait 3982341 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@1 -- # restore_cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@15 -- # local cpu 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 1 1000000 2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=1 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.192 
13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 1 powersave 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=1 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 0 1000000 2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=0 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 0 powersave 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=0 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 2 1000000 2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=2 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor 
-- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 2 powersave 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=2 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 3 1000000 2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=3 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq 00:27:10.192 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 3 powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=3 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 4 1000000 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=4 00:27:10.193 
13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 4 powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=4 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 5 1000000 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=5 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu5/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 5 powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=5 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local 
cpufreq=/sys/devices/system/cpu/cpu5/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 6 1000000 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=6 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 6 powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=6 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 7 1000000 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=7 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.193 
13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 7 powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=7 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 8 1000000 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=8 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 8 powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=8 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 9 1000000 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=9 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.193 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 9 powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=9 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 10 1000000 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=10 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 10 powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=10 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 11 1000000 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=11 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 
00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu11/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 11 powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=11 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu11/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 12 1000000 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=12 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 12 powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=12 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 13 1000000 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=13 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 13 powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=13 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 14 1000000 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=14 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 
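For each CPU the loop first calls set_cpufreq, traced above as scheduler/common.sh@374-395. xtrace does not print redirection targets, so the scaling_max_freq / scaling_min_freq paths in the sketch below are assumptions based on the standard cpufreq sysfs layout; the cpufreq_drivers and cpufreq_max_freqs arrays do appear in the trace and are assumed to have been filled in when the driver state was saved earlier. A minimal sketch of the path exercised in this run (intel_pstate):

# Sketch of set_cpufreq() as reconstructed from the traced lines @374-395.
set_cpufreq() {
    local cpu=$1 min_freq=$2 max_freq=$3
    local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq

    [[ -n ${cpufreq_drivers[cpu]} ]] || return 1   # driver detected earlier (intel_pstate here)
    [[ -n $min_freq ]] || return 1                 # a minimum frequency must be supplied

    case "${cpufreq_drivers[cpu]}" in
        *)  # only the branch taken in this run is sketched
            # Raise the upper limit first so the new minimum always fits under it.
            if [[ -n $max_freq ]] && ((max_freq >= min_freq)); then
                echo "$max_freq" > "$cpufreq/scaling_max_freq"   # assumed target file
            fi
            if ((min_freq <= cpufreq_max_freqs[cpu])); then
                echo "$min_freq" > "$cpufreq/scaling_min_freq"   # assumed target file
            fi
            ;;
    esac
}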
00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 14 powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=14 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 15 1000000 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=15 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 15 powersave 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=15 00:27:10.194 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 16 1000000 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=16 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.195 13:58:41 
scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 16 powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=16 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 17 1000000 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=17 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 17 powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=17 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 36 1000000 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=36 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local 
max_freq=2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 36 powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=36 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=37 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=37 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.195 13:58:41 
scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=38 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=38 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=39 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 
39 powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=39 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=40 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.195 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=40 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 41 1000000 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=41 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.196 13:58:41 
scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 41 powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=41 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 42 1000000 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=42 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 42 powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=42 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 43 1000000 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=43 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # 
local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 43 powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=43 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 44 1000000 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=44 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 44 powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=44 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 
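set_cpufreq_governor (scheduler/common.sh@402-406 in the trace) then restores the governor. In this run the test at @406 compares powersave against powersave on every CPU, so no write is needed. Reading the current governor back from scaling_governor for that comparison is an assumption, since the trace only shows the already-expanded right-hand side:

# Sketch of set_cpufreq_governor() as reconstructed from the traced lines @402-406.
set_cpufreq_governor() {
    local cpu=$1 governor=$2
    local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq

    # Write only when the requested governor differs from the one currently set.
    if [[ $governor != $(< "$cpufreq/scaling_governor") ]]; then
        echo "$governor" > "$cpufreq/scaling_governor"   # assumed target file
    fi
}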
00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 45 1000000 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=45 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 45 powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=45 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 46 1000000 2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=46 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.196 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 46 powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local 
cpu=46 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 47 1000000 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=47 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 47 powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=47 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 48 1000000 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=48 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.197 13:58:41 
scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 48 powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=48 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 49 1000000 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=49 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 49 powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=49 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 50 1000000 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=50 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 50 powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=50 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 51 1000000 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=51 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 51 powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=51 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.197 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 52 1000000 
2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=52 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 52 powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=52 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 53 1000000 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=53 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 53 powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=53 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local 
governor=powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 18 1000000 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=18 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu18/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 18 powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=18 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu18/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=37 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.198 13:58:41 
scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=37 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=38 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=38 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=39 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.198 13:58:41 
scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 39 powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=39 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=40 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.198 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=40 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 23 1000000 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@374 -- # local cpu=23 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 23 powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=23 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 24 1000000 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=24 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 24 powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=24 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 25 1000000 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=25 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 25 powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=25 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 26 1000000 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=26 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq 
<= cpufreq_max_freqs[cpu] )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 26 powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=26 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 27 1000000 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=27 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 27 powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=27 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 28 1000000 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=28 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.199 
13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.199 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 28 powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=28 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 29 1000000 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=29 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 29 powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=29 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 30 1000000 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=30 00:27:10.200 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 30 powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=30 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 31 1000000 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=31 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 31 powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=31 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq 
00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 32 1000000 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=32 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 32 powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=32 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 33 1000000 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=33 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@395 -- # echo 1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 33 powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=33 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 34 1000000 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=34 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 34 powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=34 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.200 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 35 1000000 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=35 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case 
"${cpufreq_drivers[cpu]}" in 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 35 powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=35 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 54 1000000 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=54 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 54 powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=54 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 55 1000000 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=55 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.201 13:58:41 
scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 55 powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=55 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 56 1000000 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=56 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 56 powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=56 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ 
powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 57 1000000 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=57 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 57 powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=57 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 58 1000000 2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=58 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.201 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.202 13:58:41 
scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 58 powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=58 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 59 1000000 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=59 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 59 powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=59 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 60 1000000 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=60 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.202 13:58:41 scheduler.dpdk_governor 
-- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 60 powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=60 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 61 1000000 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=61 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu61/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 61 powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=61 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu61/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 62 1000000 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=62 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 
00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 62 powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=62 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 63 1000000 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=63 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 63 powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=63 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq 00:27:10.202 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.202 13:58:41 scheduler.dpdk_governor -- 
scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 64 1000000 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=64 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 64 powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=64 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 65 1000000 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=65 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 65 powersave 00:27:10.203 
13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=65 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 66 1000000 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=66 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 66 powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=66 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 67 1000000 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=67 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- 
scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 67 powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=67 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 68 1000000 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=68 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 68 powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=68 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 69 1000000 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=69 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local 
cpufreq=/sys/devices/system/cpu/cpu69/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 69 powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=69 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu69/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 70 1000000 2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=70 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq 00:27:10.203 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 70 powersave 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=70 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 
00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@18 -- # set_cpufreq 71 1000000 2300001 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@374 -- # local cpu=71 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@375 -- # local min_freq=1000000 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@376 -- # local max_freq=2300001 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@377 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@380 -- # [[ -n intel_pstate ]] 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@381 -- # [[ -n 1000000 ]] 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@383 -- # case "${cpufreq_drivers[cpu]}" in 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # [[ -n 2300001 ]] 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@391 -- # (( max_freq >= min_freq )) 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@392 -- # echo 2300001 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@394 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@395 -- # echo 1000000 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/governor.sh@19 -- # set_cpufreq_governor 71 powersave 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@402 -- # local cpu=71 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@403 -- # local governor=powersave 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@404 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq 00:27:10.204 13:58:41 scheduler.dpdk_governor -- scheduler/common.sh@406 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:27:10.204 00:27:10.204 real 0m21.952s 00:27:10.204 user 0m41.355s 00:27:10.204 sys 0m7.667s 00:27:10.204 13:58:41 scheduler.dpdk_governor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.204 13:58:41 scheduler.dpdk_governor -- common/autotest_common.sh@10 -- # set +x 00:27:10.204 ************************************ 00:27:10.204 END TEST dpdk_governor 00:27:10.204 ************************************ 00:27:10.204 13:58:41 scheduler -- scheduler/scheduler.sh@18 -- # run_test interrupt_mode /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh 00:27:10.204 13:58:41 scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.204 13:58:41 scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.204 13:58:41 scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:10.463 ************************************ 00:27:10.463 START TEST interrupt_mode 00:27:10.463 ************************************ 00:27:10.463 13:58:41 scheduler.interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh 00:27:10.724 * Looking for test storage... 
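[editor's note] The xtrace that ends with the "END TEST dpdk_governor" banner above is scheduler/common.sh restoring every test CPU to its full 1000000-2300001 kHz range and the powersave governor before the next test runs. The following is a minimal, hypothetical sketch of that restore loop, not the SPDK implementation: the scaling_max_freq/scaling_min_freq/scaling_governor target files and the $spdk_main_core / $cpus variables are assumptions inferred from the trace, which only shows the values being echoed.

#!/usr/bin/env bash
# Hypothetical restore loop mirroring the trace above (intel_pstate branch).
# Assumed sysfs targets: scaling_max_freq, scaling_min_freq, scaling_governor.

set_cpufreq() {                     # set_cpufreq <cpu> <min_freq> <max_freq>
    local cpu=$1 min_freq=$2 max_freq=$3
    local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
    ((max_freq >= min_freq)) || return 1            # same sanity check as logged
    echo "$max_freq" > "$cpufreq/scaling_max_freq"  # max bound first, as in the trace
    echo "$min_freq" > "$cpufreq/scaling_min_freq"
}

set_cpufreq_governor() {            # set_cpufreq_governor <cpu> <governor>
    local cpu=$1 governor=$2
    local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
    # Write only on change; in the log the governor is already powersave,
    # so the "[[ powersave != powersave ]]" test skips the write every time.
    if [[ $governor != "$(< "$cpufreq/scaling_governor")" ]]; then
        echo "$governor" > "$cpufreq/scaling_governor"
    fi
}

for cpu in "$spdk_main_core" "${cpus[@]}"; do       # main core plus all test CPUs
    set_cpufreq "$cpu" 1000000 2300001
    set_cpufreq_governor "$cpu" powersave
done

Writing the max bound before the min bound avoids a transient window where min would exceed max, which the cpufreq driver may reject; the trace shows the same ordering (echo 2300001, then echo 1000000) for every CPU.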
00:27:10.724 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:27:10.724 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:10.724 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:10.724 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:10.724 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.724 13:58:42 scheduler.interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.725 --rc genhtml_branch_coverage=1 00:27:10.725 --rc genhtml_function_coverage=1 00:27:10.725 --rc genhtml_legend=1 00:27:10.725 --rc geninfo_all_blocks=1 00:27:10.725 --rc geninfo_unexecuted_blocks=1 00:27:10.725 00:27:10.725 ' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.725 --rc genhtml_branch_coverage=1 00:27:10.725 --rc genhtml_function_coverage=1 00:27:10.725 --rc genhtml_legend=1 00:27:10.725 --rc geninfo_all_blocks=1 00:27:10.725 --rc geninfo_unexecuted_blocks=1 00:27:10.725 00:27:10.725 ' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.725 --rc genhtml_branch_coverage=1 00:27:10.725 --rc genhtml_function_coverage=1 00:27:10.725 --rc genhtml_legend=1 00:27:10.725 --rc geninfo_all_blocks=1 00:27:10.725 --rc geninfo_unexecuted_blocks=1 00:27:10.725 00:27:10.725 ' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:10.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.725 --rc genhtml_branch_coverage=1 00:27:10.725 --rc genhtml_function_coverage=1 00:27:10.725 --rc genhtml_legend=1 00:27:10.725 --rc geninfo_all_blocks=1 00:27:10.725 --rc geninfo_unexecuted_blocks=1 00:27:10.725 00:27:10.725 ' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:27:10.725 13:58:42 
scheduler.interrupt_mode -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/cgroups.sh@244 -- # check_cgroup 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/cgroups.sh@10 -- # echo 2 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@12 -- # trap 'killprocess "$spdk_pid"' EXIT 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@14 -- # cpus=() 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@14 -- # declare -a cpus 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@15 -- # cpus_to_collect=() 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@15 -- # declare -a cpus_to_collect 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@17 -- # parse_cpu_list /dev/fd/62 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@34 -- # local list=/dev/fd/62 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@17 -- # echo 1,2,3,4,37,38,39,40 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@35 -- # local elem elems cpus 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@38 -- # IFS=, 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@38 -- # read -ra elems 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@40 -- # (( 8 > 0 )) 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 1 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=1 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 2 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=2 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 3 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=3 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 4 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=4 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 
scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 37 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=37 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 38 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=38 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 39 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=39 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@43 -- # [[ 40 == *-* ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@49 -- # cpus[elem]=40 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@52 -- # printf '%u\n' 1 2 3 4 37 38 39 40 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@17 -- # fold_list_onto_array cpus 1 2 3 4 37 38 39 40 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@16 -- # local array=cpus 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@17 -- # local elem 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@19 -- # shift 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=1' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=1 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=2' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=2 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=3' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=3 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=4' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=4 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=37' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=37 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=38' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=38 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=39' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=39 00:27:10.725 
13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@21 -- # for elem in "$@" 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # eval 'cpus[elem]=40' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@22 -- # cpus[elem]=40 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@19 -- # cpus=("${cpus[@]}") 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/interrupt.sh@78 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@412 -- # [[ -e /proc//status ]] 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@416 -- # spdk_pid=3985901 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@418 -- # waitforlisten 3985901 00:27:10.725 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@415 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@835 -- # '[' -z 3985901 ']' 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.725 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:10.725 [2024-12-05 13:58:42.157873] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
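The cpu list handling traced a few lines above (parse_cpu_list feeding fold_list_onto_array) is what turned the literal 1,2,3,4,37,38,39,40 into the per-cpu array behind the -m mask. A sketch of that expansion; range support is implied by the *-* tests in the trace but not exercised in this run:

    # Expand "1,2,3,4,37,38,39,40" (or ranges such as "1-4") into cpus[] keyed by id.
    parse_cpu_list() {
        local list=$1 elem elems=() cpu
        local cpus=()
        IFS=, read -ra elems < "$list"
        for elem in "${elems[@]}"; do
            if [[ $elem == *-* ]]; then
                for ((cpu = ${elem%-*}; cpu <= ${elem#*-}; cpu++)); do
                    cpus[cpu]=$cpu
                done
            else
                cpus[elem]=$elem
            fi
        done
        printf '%u\n' "${cpus[@]}"
    }

    parse_cpu_list <(echo 1,2,3,4,37,38,39,40)   # -> 1 2 3 4 37 38 39 40, one per line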
00:27:10.725 [2024-12-05 13:58:42.157948] [ DPDK EAL parameters: scheduler --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985901 ] 00:27:10.725 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.984 [2024-12-05 13:58:42.290347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 8 00:27:10.984 [2024-12-05 13:58:42.358344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.984 [2024-12-05 13:58:42.358420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.984 [2024-12-05 13:58:42.358520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.984 [2024-12-05 13:58:42.358601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 39 00:27:10.984 [2024-12-05 13:58:42.358666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.984 [2024-12-05 13:58:42.358542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 37 00:27:10.984 [2024-12-05 13:58:42.358666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 40 00:27:10.984 [2024-12-05 13:58:42.358569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 38 00:27:10.984 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.984 13:58:42 scheduler.interrupt_mode -- common/autotest_common.sh@868 -- # return 0 00:27:10.984 13:58:42 scheduler.interrupt_mode -- scheduler/common.sh@419 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic 00:27:11.921 [2024-12-05 13:58:43.156848] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:27:11.921 [2024-12-05 13:58:43.156888] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:27:11.921 [2024-12-05 13:58:43.156906] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:27:11.921 13:58:43 scheduler.interrupt_mode -- scheduler/common.sh@420 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:27:12.490 [2024-12-05 13:58:43.954674] 'OCF_Core' volume operations registered 00:27:12.490 [2024-12-05 13:58:43.954717] 'OCF_Cache' volume operations registered 00:27:12.490 [2024-12-05 13:58:43.959743] 'OCF Composite' volume operations registered 00:27:12.490 [2024-12-05 13:58:43.964806] 'SPDK_block_device' volume operations registered 00:27:12.490 [2024-12-05 13:58:43.966021] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
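The two rpc.py invocations above are the whole start-up handshake for a --wait-for-rpc launch: the scheduler is selected first, then subsystem initialization runs (which is what produces the load-limit/core-limit/core-busy notices and the OCF volume registrations). Stand-alone, the sequence looks like:

    # Launch the scheduler test app paused on the 8 test cores, main core 1.
    ./test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc &

    # Once the RPC socket is up (the test uses waitforlisten for this):
    ./scripts/rpc.py framework_set_scheduler dynamic   # notices above: load limit 20, core limit 80, core busy 95
    ./scripts/rpc.py framework_start_init              # OCF/SPDK_block_device registration, app reports started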
00:27:12.490 13:58:43 scheduler.interrupt_mode -- scheduler/interrupt.sh@80 -- # interrupt 00:27:12.490 13:58:43 scheduler.interrupt_mode -- scheduler/interrupt.sh@22 -- # local busy_cpus 00:27:12.490 13:58:43 scheduler.interrupt_mode -- scheduler/interrupt.sh@23 -- # local cpu thread 00:27:12.490 13:58:43 scheduler.interrupt_mode -- scheduler/interrupt.sh@25 -- # local reactor_framework 00:27:12.490 13:58:43 scheduler.interrupt_mode -- scheduler/interrupt.sh@27 -- # cpus_to_collect=("${cpus[@]}") 00:27:12.490 13:58:43 scheduler.interrupt_mode -- scheduler/interrupt.sh@28 -- # collect_cpu_idle 00:27:12.490 13:58:43 scheduler.interrupt_mode -- scheduler/common.sh@643 -- # (( 8 > 0 )) 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@645 -- # local time=5 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@646 -- # local cpu 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@647 -- # local samples 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # is_idle=() 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # local -g is_idle 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@650 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' '1 2 3 4 37 38 39 40' 5 00:27:12.490 Collecting cpu idle stats (cpus: 1 2 3 4 37 38 39 40) for 5 seconds... 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@653 -- # get_cpu_time 5 idle 0 1 1 2 3 4 37 38 39 40 00:27:12.490 13:58:44 scheduler.interrupt_mode -- scheduler/common.sh@500 -- # xtrace_disable 00:27:12.490 13:58:44 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@655 -- # local user_load load_median user_spdk_load 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 0 0 0 0 0 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('0' '0' '0' '0' '0') 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 0 0 0 0 0 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=0 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 0 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=0 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 1 '0 0 0 0 0' 0 0 00:27:19.055 * cpu1 idle samples: 0 0 0 0 0 (avg: 0%, median: 0%) 00:27:19.055 13:58:50 scheduler.interrupt_mode -- 
scheduler/common.sh@669 -- # cpu_usage_clk_tck 1 user 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=1 time=user 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_1 ]] 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_1 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:19.055 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 102 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=102 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=100 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 100 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 1 user 100 00:27:19.056 * cpu1 user usage: 100 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 1 '1298759 1298862 1298964 1299066 1299168' 00:27:19.056 * cpu1 user samples: 1298759 1298862 1298964 1299066 1299168 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 1 '595 595 595 595 595' 00:27:19.056 * cpu1 nice samples: 595 595 595 595 595 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 1 '139835 139835 139835 139835 139835' 00:27:19.056 * cpu1 system samples: 139835 139835 139835 139835 139835 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=100 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@673 -- # (( user_load <= 15 )) 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@677 -- # printf '* cpu%u is not idle\n' 1 00:27:19.056 * cpu1 is not idle 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@678 -- # is_idle[cpu]=0 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # get_spdk_proc_time 5 1 00:27:19.056 13:58:50 scheduler.interrupt_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:27:19.056 13:58:50 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:23.247 stime samples: 0 0 0 0 00:27:23.247 utime samples: 0 100 100 100 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # user_spdk_load=100 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@684 -- # (( user_spdk_load <= 15 )) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 
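Each per-cpu verdict below starts with calc_median over the five one-second idle samples; the helper is just sort-and-take-the-middle. A sketch consistent with the trace (the even-count branch is never hit in this log, so the averaging there is an assumption):

    # Median of N samples: sort numerically, pick the middle element.
    calc_median() {
        local samples=("$@") samples_sorted middle
        samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
        middle=$(( ${#samples[@]} / 2 ))
        if (( ${#samples[@]} % 2 == 0 )); then
            # assumed: average the two middle values for an even count
            echo $(( (samples_sorted[middle - 1] + samples_sorted[middle]) / 2 ))
        else
            echo "${samples_sorted[middle]}"
        fi
    }

    calc_median 0 0 0 0 0            # -> 0   (cpu1 above)
    calc_median 98 100 100 98 99     # -> 99  (cpu3 below)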
00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 100 100 100 100 100 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('100' '100' '100' '100' '100') 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 100 100 100 100 100 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=100 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 100 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=100 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '100 100 100 100 100' 100 100 00:27:23.247 * cpu2 idle samples: 100 100 100 100 100 (avg: 100%, median: 100%) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 2 user 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=2 time=user 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_2 ]] 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_2 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 2 user 0 00:27:23.247 * cpu2 user usage: 0 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 2 '859209 859209 859209 859209 859209' 00:27:23.247 * cpu2 user samples: 859209 859209 859209 859209 859209 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 2 '2033 2033 2033 2033 2033' 00:27:23.247 * cpu2 nice samples: 2033 2033 2033 2033 2033 00:27:23.247 13:58:54 
scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 2 '124538 124538 124538 124538 124538' 00:27:23.247 * cpu2 system samples: 124538 124538 124538 124538 124538 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 2 00:27:23.247 * cpu2 is idle 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 98 100 100 98 99 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('98' '100' '100' '98' '99') 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 98 100 100 98 99 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=99 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 99 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=99 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '98 100 100 98 99' 99 99 00:27:23.247 * cpu3 idle samples: 98 100 100 98 99 (avg: 99%, median: 99%) 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 3 user 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=3 time=user 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_3 ]] 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_3 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:23.247 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:23.248 13:58:54 
scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 3 user 0 00:27:23.248 * cpu3 user usage: 0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 3 '728536 728536 728536 728538 728538' 00:27:23.248 * cpu3 user samples: 728536 728536 728536 728538 728538 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 3 '509 509 509 509 509' 00:27:23.248 * cpu3 nice samples: 509 509 509 509 509 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 3 '96651 96651 96651 96651 96652' 00:27:23.248 * cpu3 system samples: 96651 96651 96651 96651 96652 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 3 00:27:23.248 * cpu3 is idle 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 99 98 97 98 98 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('99' '98' '97' '98' '98') 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 99 98 97 98 98 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=98 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 98 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=98 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '99 98 97 98 98' 98 98 00:27:23.248 * cpu4 idle samples: 99 98 97 98 98 (avg: 98%, median: 98%) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 4 user 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=4 time=user 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v 
raw_samples_4 ]] 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_4 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 4 user 1 00:27:23.248 * cpu4 user usage: 1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 4 '489282 489284 489286 489287 489288' 00:27:23.248 * cpu4 user samples: 489282 489284 489286 489287 489288 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 4 '309 309 309 309 309' 00:27:23.248 * cpu4 nice samples: 309 309 309 309 309 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 4 '108035 108035 108036 108037 108038' 00:27:23.248 * cpu4 system samples: 108035 108035 108036 108037 108038 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 4 00:27:23.248 * cpu4 is idle 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 100 100 100 100 100 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('100' '100' '100' '100' '100') 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 100 100 100 100 100 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=100 
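The usage figures being printed come from cpu_usage_clk_tck, which takes the tick delta for the requested /proc/stat field (user here) over the last interval and normalizes it against CLK_TCK. The real helper reads the cached raw_samples_<cpu> arrays; the arithmetic reduces to roughly this, assuming 1-second samples:

    # Turn a per-interval /proc/stat tick delta into a percentage (sketch only;
    # the helper in common.sh derives clk_delta from its cached raw samples).
    ticks_to_usage() {                            # hypothetical name
        local clk_delta=$1                        # e.g. 102 user ticks in the last second
        local clk_tck; clk_tck=$(getconf CLK_TCK) # typically 100
        local usage=$(( clk_delta * 100 / clk_tck ))
        (( usage > 100 )) && usage=100            # clamp; rounding pushed cpu1 above to 102 before clamping
        printf '%u\n' "$usage"
    }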
00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 100 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=100 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 37 '100 100 100 100 100' 100 100 00:27:23.248 * cpu37 idle samples: 100 100 100 100 100 (avg: 100%, median: 100%) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 37 user 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=37 time=user 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_37 ]] 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_37 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 37 user 0 00:27:23.248 * cpu37 user usage: 0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 37 '219415 219415 219415 219415 219415' 00:27:23.248 * cpu37 user samples: 219415 219415 219415 219415 219415 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 37 '78 78 78 78 78' 00:27:23.248 * cpu37 nice samples: 78 78 78 78 78 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 37 '54947 54947 54947 54947 54947' 00:27:23.248 * cpu37 system samples: 54947 54947 54947 54947 54947 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 37 00:27:23.248 * cpu37 is idle 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 99 100 99 100 99 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('99' '100' '99' '100' 
'99') 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 99 100 99 100 99 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=99 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 99 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=99 00:27:23.248 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 38 '99 100 99 100 99' 99 99 00:27:23.248 * cpu38 idle samples: 99 100 99 100 99 (avg: 99%, median: 99%) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 38 user 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=38 time=user 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_38 ]] 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_38 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 38 user 1 00:27:23.249 * cpu38 user usage: 1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 38 '282120 282120 282120 282120 282121' 00:27:23.249 * cpu38 user samples: 282120 282120 282120 282120 282121 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 38 '82 82 82 82 82' 00:27:23.249 * cpu38 nice samples: 82 82 82 82 82 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 38 '73928 73928 73929 73929 73929' 00:27:23.249 * cpu38 system samples: 73928 73928 73929 73929 73929 00:27:23.249 13:58:54 scheduler.interrupt_mode -- 
scheduler/common.sh@669 -- # user_load=1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 38 00:27:23.249 * cpu38 is idle 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 100 100 100 99 100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('100' '100' '100' '99' '100') 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 100 100 100 99 100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 39 '100 100 100 99 100' 99 100 00:27:23.249 * cpu39 idle samples: 100 100 100 99 100 (avg: 99%, median: 100%) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 39 user 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=39 time=user 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_39 ]] 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_39 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 
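The idle/busy verdicts themselves hinge on two thresholds that recur in the trace: a core counts as idle when its most recent idle sample is at least 70%, or when its user load over the window stays at or below 15%; cpu1 above fails both and is flagged not idle. A hypothetical helper capturing that decision (the exact branch structure in common.sh is an assumption):

    # Classify one cpu from its five idle samples and computed user load.
    declare -a is_idle
    classify_cpu() {                               # hypothetical name
        local cpu=$1 user_load=$2; shift 2
        local samples=("$@")                       # five idle-percentage samples
        if (( samples[-1] >= 70 )) || (( user_load <= 15 )); then
            printf '* cpu%u is idle\n' "$cpu";     is_idle[cpu]=1
        else
            printf '* cpu%u is not idle\n' "$cpu"; is_idle[cpu]=0
        fi
    }

    classify_cpu 1 100 0 0 0 0 0               # -> * cpu1 is not idle
    classify_cpu 2 0 100 100 100 100 100       # -> * cpu2 is idle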
00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 39 user 0 00:27:23.249 * cpu39 user usage: 0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 39 '302696 302696 302696 302697 302697' 00:27:23.249 * cpu39 user samples: 302696 302696 302696 302697 302697 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 39 '128 128 128 128 128' 00:27:23.249 * cpu39 nice samples: 128 128 128 128 128 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 39 '72325 72325 72325 72325 72325' 00:27:23.249 * cpu39 system samples: 72325 72325 72325 72325 72325 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 39 00:27:23.249 * cpu39 is idle 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 100 100 100 100 100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('100' '100' '100' '100' '100') 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 100 100 100 100 100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=100 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 40 '100 100 100 100 100' 100 100 00:27:23.249 * cpu40 idle samples: 100 100 100 100 100 (avg: 100%, median: 100%) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 40 user 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=40 time=user 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_40 ]] 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_40 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@702 
-- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 40 user 0 00:27:23.249 * cpu40 user usage: 0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 40 '310346 310346 310346 310346 310346' 00:27:23.249 * cpu40 user samples: 310346 310346 310346 310346 310346 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 40 '20 20 20 20 20' 00:27:23.249 * cpu40 nice samples: 20 20 20 20 20 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 40 '82573 82573 82573 82573 82573' 00:27:23.249 * cpu40 system samples: 82573 82573 82573 82573 82573 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 40 00:27:23.249 * cpu40 is idle 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@31 -- # rpc_cmd framework_get_reactors 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@31 -- # jq -r '.reactors[]' 00:27:23.249 13:58:54 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.249 13:58:54 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:23.249 13:58:54 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.249 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@31 -- # reactor_framework='{ 00:27:23.249 "lcore": 1, 00:27:23.249 "tid": 3985901, 00:27:23.249 "busy": 768226258, 00:27:23.249 "idle": 27020111422, 00:27:23.249 "in_interrupt": false, 00:27:23.249 "irq": 5, 00:27:23.249 "sys": 19, 00:27:23.249 "usr": 1180, 00:27:23.249 "core_freq": 1100, 00:27:23.249 "lw_threads": [ 00:27:23.249 { 00:27:23.249 "name": "app_thread", 00:27:23.250 "id": 1, 00:27:23.250 "cpumask": "2", 00:27:23.250 "elapsed": 27794456000 00:27:23.250 } 00:27:23.250 ] 00:27:23.250 } 00:27:23.250 { 00:27:23.250 "lcore": 2, 00:27:23.250 "tid": 3985954, 00:27:23.250 "busy": 0, 00:27:23.250 "idle": 1841839710, 00:27:23.250 "in_interrupt": true, 00:27:23.250 "irq": 1, 00:27:23.250 "sys": 3, 00:27:23.250 "usr": 81, 00:27:23.250 "core_freq": 1000, 00:27:23.250 "lw_threads": [] 00:27:23.250 } 00:27:23.250 { 00:27:23.250 "lcore": 3, 00:27:23.250 "tid": 3985955, 00:27:23.250 "busy": 0, 00:27:23.250 "idle": 
1850147826, 00:27:23.250 "in_interrupt": true, 00:27:23.250 "irq": 0, 00:27:23.250 "sys": 5, 00:27:23.250 "usr": 85, 00:27:23.250 "core_freq": 1000, 00:27:23.250 "lw_threads": [] 00:27:23.250 } 00:27:23.250 { 00:27:23.250 "lcore": 4, 00:27:23.250 "tid": 3985956, 00:27:23.250 "busy": 0, 00:27:23.250 "idle": 1850248226, 00:27:23.250 "in_interrupt": true, 00:27:23.250 "irq": 1, 00:27:23.250 "sys": 12, 00:27:23.250 "usr": 94, 00:27:23.250 "core_freq": 1000, 00:27:23.250 "lw_threads": [] 00:27:23.250 } 00:27:23.250 { 00:27:23.250 "lcore": 37, 00:27:23.250 "tid": 3985957, 00:27:23.250 "busy": 0, 00:27:23.250 "idle": 1850591784, 00:27:23.250 "in_interrupt": true, 00:27:23.250 "irq": 0, 00:27:23.250 "sys": 0, 00:27:23.250 "usr": 82, 00:27:23.250 "core_freq": 1000, 00:27:23.250 "lw_threads": [] 00:27:23.250 } 00:27:23.250 { 00:27:23.250 "lcore": 38, 00:27:23.250 "tid": 3985958, 00:27:23.250 "busy": 0, 00:27:23.250 "idle": 1844701424, 00:27:23.250 "in_interrupt": true, 00:27:23.250 "irq": 1, 00:27:23.250 "sys": 22, 00:27:23.250 "usr": 135, 00:27:23.250 "core_freq": 1000, 00:27:23.250 "lw_threads": [] 00:27:23.250 } 00:27:23.250 { 00:27:23.250 "lcore": 39, 00:27:23.250 "tid": 3985959, 00:27:23.250 "busy": 0, 00:27:23.250 "idle": 1860428504, 00:27:23.250 "in_interrupt": true, 00:27:23.250 "irq": 0, 00:27:23.250 "sys": 6, 00:27:23.250 "usr": 89, 00:27:23.250 "core_freq": 1000, 00:27:23.250 "lw_threads": [] 00:27:23.250 } 00:27:23.250 { 00:27:23.250 "lcore": 40, 00:27:23.250 "tid": 3985960, 00:27:23.250 "busy": 0, 00:27:23.250 "idle": 1860885264, 00:27:23.250 "in_interrupt": true, 00:27:23.250 "irq": 1, 00:27:23.250 "sys": 1, 00:27:23.250 "usr": 82, 00:27:23.250 "core_freq": 1000, 00:27:23.250 "lw_threads": [] 00:27:23.250 }' 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 2) | .lw_threads[].id' 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id' 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id' 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:27:23.250 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 37) | .lw_threads[].id' 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 38) | .lw_threads[].id' 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:27:23.509 13:58:54 scheduler.interrupt_mode -- 
scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 39) | .lw_threads[].id' 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:27:23.509 13:58:54 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 40) | .lw_threads[].id' 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@41 -- # (( is_idle[cpu] == 0 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:27:23.768 13:58:55 
scheduler.interrupt_mode -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@49 -- # busy_cpus=("${cpus[@]:1:3}") 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@49 -- # threads=() 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}" 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # mask_cpus 2 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@172 -- # fold_array_onto_string 2 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@27 -- # cpus=('2') 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@27 -- # local cpus 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@29 -- # local IFS=, 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@30 -- # echo 2 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@172 -- # printf '[%s]\n' 2 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # create_thread -n thread2 -m '[2]' -a 100 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread2 -m '[2]' -a 100 00:27:23.768 13:58:55 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.768 13:58:55 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:23.768 13:58:55 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # threads[cpu]=2 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu") 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/interrupt.sh@55 -- # collect_cpu_idle 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@643 -- # (( 1 > 0 )) 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@645 -- # local time=5 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@646 -- # local cpu 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@647 -- # local samples 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # is_idle=() 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # local -g is_idle 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@650 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5 00:27:23.768 Collecting cpu idle stats (cpus: 2) for 5 seconds... 
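For reference, the '[2]' mask handed to scheduler_thread_create above is produced by mask_cpus/fold_array_onto_string, which simply joins the requested cpu ids with commas and wraps them in brackets before the RPC is issued. A minimal stand-alone sketch of that folding (the function name below is illustrative, not the test's helper):

    fold_cpus() {
        # join cpu ids with commas and wrap in brackets, e.g. 2 -> "[2]", 1 2 3 -> "[1,2,3]"
        local IFS=,
        local cpus=("$@")
        printf '[%s]\n' "${cpus[*]}"
    }
    fold_cpus 2      # prints [2], the cpu list passed to create_thread -m above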
00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@653 -- # get_cpu_time 5 idle 0 1 2 00:27:23.768 13:58:55 scheduler.interrupt_mode -- scheduler/common.sh@500 -- # xtrace_disable 00:27:23.768 13:58:55 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@655 -- # local user_load load_median user_spdk_load 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 100 15 0 0 0 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('100' '15' '0' '0' '0') 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 100 15 0 0 0 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=0 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 0 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=0 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '100 15 0 0 0' 23 0 00:27:30.353 * cpu2 idle samples: 100 15 0 0 0 (avg: 23%, median: 0%) 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 2 user 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=2 time=user 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_2 ]] 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_2 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 102 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=102 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=100 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 100 
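The calc_median call above reduces the five idle-percentage samples to their median by sorting them numerically and taking the middle element; for '100 15 0 0 0' the sorted list is 0 0 0 15 100, so the reported median is 0. A stand-alone sketch of the same odd-count case (the real helper has an extra branch for even sample counts, as the (( 5 % 2 == 0 )) check hints):

    median_of() {
        # sort the samples numerically and return the middle one (odd count assumed)
        local sorted=($(printf '%s\n' "$@" | sort -n))
        local middle=$(( $# / 2 ))
        echo "${sorted[middle]}"
    }
    median_of 100 15 0 0 0   # prints 0, matching the load_median computed above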
00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 2 user 100 00:27:30.353 * cpu2 user usage: 100 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 2 '859223 859308 859410 859511 859613' 00:27:30.353 * cpu2 user samples: 859223 859308 859410 859511 859613 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 2 '2033 2033 2033 2033 2033' 00:27:30.353 * cpu2 nice samples: 2033 2033 2033 2033 2033 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 2 '124541 124541 124541 124541 124541' 00:27:30.353 * cpu2 system samples: 124541 124541 124541 124541 124541 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=100 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@673 -- # (( user_load <= 15 )) 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@677 -- # printf '* cpu%u is not idle\n' 2 00:27:30.353 * cpu2 is not idle 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@678 -- # is_idle[cpu]=0 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # get_spdk_proc_time 5 2 00:27:30.353 13:59:01 scheduler.interrupt_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:27:30.353 13:59:01 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:34.537 stime samples: 0 0 0 0 00:27:34.537 utime samples: 0 101 100 100 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # user_spdk_load=100 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@684 -- # (( user_spdk_load <= 15 )) 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors 00:27:34.537 13:59:05 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]' 00:27:34.537 13:59:05 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:34.537 13:59:05 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # reactor_framework='{ 00:27:34.537 "lcore": 1, 00:27:34.537 "tid": 3985901, 00:27:34.537 "busy": 4020493938, 00:27:34.537 "idle": 48651865658, 00:27:34.537 "in_interrupt": false, 00:27:34.537 "irq": 9, 00:27:34.537 "sys": 21, 00:27:34.537 "usr": 2260, 00:27:34.537 "core_freq": 2300, 00:27:34.537 "lw_threads": [ 00:27:34.537 { 00:27:34.537 "name": "app_thread", 00:27:34.537 "id": 1, 00:27:34.537 "cpumask": "2", 00:27:34.537 "elapsed": 52678557702 00:27:34.537 } 00:27:34.537 ] 00:27:34.537 } 00:27:34.537 { 00:27:34.537 "lcore": 2, 00:27:34.537 "tid": 3985954, 00:27:34.537 "busy": 20013396344, 00:27:34.537 "idle": 2531975084, 00:27:34.537 "in_interrupt": false, 00:27:34.537 "irq": 3, 00:27:34.537 "sys": 4, 00:27:34.537 "usr": 995, 00:27:34.537 "core_freq": 2300, 00:27:34.537 "lw_threads": [ 00:27:34.537 { 00:27:34.537 "name": "thread2", 00:27:34.537 "id": 2, 00:27:34.537 "cpumask": "4", 00:27:34.537 "elapsed": 20002968236 00:27:34.537 } 00:27:34.537 ] 00:27:34.537 } 00:27:34.537 { 00:27:34.537 "lcore": 3, 00:27:34.537 "tid": 3985955, 
00:27:34.537 "busy": 0, 00:27:34.537 "idle": 1850147826, 00:27:34.537 "in_interrupt": true, 00:27:34.537 "irq": 0, 00:27:34.537 "sys": 8, 00:27:34.537 "usr": 88, 00:27:34.537 "core_freq": 1000, 00:27:34.537 "lw_threads": [] 00:27:34.537 } 00:27:34.537 { 00:27:34.537 "lcore": 4, 00:27:34.537 "tid": 3985956, 00:27:34.537 "busy": 0, 00:27:34.537 "idle": 1850248226, 00:27:34.537 "in_interrupt": true, 00:27:34.537 "irq": 2, 00:27:34.537 "sys": 19, 00:27:34.537 "usr": 119, 00:27:34.537 "core_freq": 1000, 00:27:34.537 "lw_threads": [] 00:27:34.537 } 00:27:34.537 { 00:27:34.537 "lcore": 37, 00:27:34.537 "tid": 3985957, 00:27:34.537 "busy": 0, 00:27:34.537 "idle": 1850591784, 00:27:34.537 "in_interrupt": true, 00:27:34.537 "irq": 0, 00:27:34.537 "sys": 0, 00:27:34.537 "usr": 82, 00:27:34.537 "core_freq": 1000, 00:27:34.537 "lw_threads": [] 00:27:34.537 } 00:27:34.537 { 00:27:34.537 "lcore": 38, 00:27:34.537 "tid": 3985958, 00:27:34.537 "busy": 0, 00:27:34.537 "idle": 1844701424, 00:27:34.537 "in_interrupt": true, 00:27:34.537 "irq": 2, 00:27:34.537 "sys": 24, 00:27:34.537 "usr": 143, 00:27:34.537 "core_freq": 1000, 00:27:34.537 "lw_threads": [] 00:27:34.537 } 00:27:34.537 { 00:27:34.537 "lcore": 39, 00:27:34.537 "tid": 3985959, 00:27:34.537 "busy": 0, 00:27:34.537 "idle": 1860428504, 00:27:34.537 "in_interrupt": true, 00:27:34.537 "irq": 0, 00:27:34.537 "sys": 12, 00:27:34.537 "usr": 108, 00:27:34.537 "core_freq": 1000, 00:27:34.537 "lw_threads": [] 00:27:34.537 } 00:27:34.537 { 00:27:34.537 "lcore": 40, 00:27:34.537 "tid": 3985960, 00:27:34.537 "busy": 0, 00:27:34.537 "idle": 1860885264, 00:27:34.537 "in_interrupt": true, 00:27:34.537 "irq": 1, 00:27:34.537 "sys": 4, 00:27:34.537 "usr": 86, 00:27:34.537 "core_freq": 1000, 00:27:34.537 "lw_threads": [] 00:27:34.537 }' 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 2) | .lw_threads[] | select(.name == "thread2")' 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@57 -- # [[ -n { 00:27:34.537 "name": "thread2", 00:27:34.537 "id": 2, 00:27:34.537 "cpumask": "4", 00:27:34.537 "elapsed": 20002968236 00:27:34.537 } ]] 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 )) 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}" 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # mask_cpus 3 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@172 -- # fold_array_onto_string 3 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@27 -- # cpus=('3') 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@27 -- # local cpus 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@29 -- # local IFS=, 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@30 -- # echo 3 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@172 -- # printf '[%s]\n' 3 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # create_thread -n thread3 -m '[3]' -a 100 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread3 -m '[3]' -a 100 00:27:34.537 13:59:05 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.537 13:59:05 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:34.537 13:59:05 
scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # threads[cpu]=3 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu") 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/interrupt.sh@55 -- # collect_cpu_idle 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@643 -- # (( 1 > 0 )) 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@645 -- # local time=5 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@646 -- # local cpu 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@647 -- # local samples 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # is_idle=() 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # local -g is_idle 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@650 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5 00:27:34.537 Collecting cpu idle stats (cpus: 3) for 5 seconds... 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@653 -- # get_cpu_time 5 idle 0 1 3 00:27:34.537 13:59:05 scheduler.interrupt_mode -- scheduler/common.sh@500 -- # xtrace_disable 00:27:34.537 13:59:05 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@655 -- # local user_load load_median user_spdk_load 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 57 0 0 0 0 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('57' '0' '0' '0' '0') 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 57 0 0 0 0 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=0 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 0 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=0 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '57 0 0 0 0' 11 0 00:27:41.104 * cpu3 idle samples: 57 0 0 0 0 (avg: 11%, median: 0%) 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 3 user 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=3 time=user 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:41.104 
13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_3 ]] 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_3 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:41.104 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 101 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=101 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=100 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 100 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 3 user 100 00:27:41.105 * cpu3 user usage: 100 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 3 '728585 728687 728788 728890 728991' 00:27:41.105 * cpu3 user samples: 728585 728687 728788 728890 728991 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 3 '509 509 509 509 509' 00:27:41.105 * cpu3 nice samples: 509 509 509 509 509 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 3 '96657 96657 96657 96657 96657' 00:27:41.105 * cpu3 system samples: 96657 96657 96657 96657 96657 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=100 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@673 -- # (( user_load <= 15 )) 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@677 -- # printf '* cpu%u is not idle\n' 3 00:27:41.105 * cpu3 is not idle 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@678 -- # is_idle[cpu]=0 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # get_spdk_proc_time 5 3 00:27:41.105 13:59:11 scheduler.interrupt_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:27:41.105 13:59:11 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:44.390 stime samples: 0 0 0 0 00:27:44.390 utime samples: 0 100 100 100 00:27:44.390 13:59:15 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # user_spdk_load=100 00:27:44.390 13:59:15 scheduler.interrupt_mode -- scheduler/common.sh@684 -- # (( user_spdk_load <= 15 )) 00:27:44.390 13:59:15 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]' 00:27:44.390 13:59:15 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors 00:27:44.390 13:59:15 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.390 13:59:15 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:44.648 13:59:15 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.648 13:59:15 
scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # reactor_framework='{ 00:27:44.648 "lcore": 1, 00:27:44.648 "tid": 3985901, 00:27:44.648 "busy": 4049205718, 00:27:44.648 "idle": 72757495636, 00:27:44.648 "in_interrupt": false, 00:27:44.648 "irq": 11, 00:27:44.648 "sys": 22, 00:27:44.648 "usr": 3308, 00:27:44.648 "core_freq": 2300, 00:27:44.648 "lw_threads": [ 00:27:44.648 { 00:27:44.648 "name": "app_thread", 00:27:44.648 "id": 1, 00:27:44.648 "cpumask": "2", 00:27:44.648 "elapsed": 76812888712 00:27:44.648 } 00:27:44.648 ] 00:27:44.648 } 00:27:44.648 { 00:27:44.648 "lcore": 2, 00:27:44.648 "tid": 3985954, 00:27:44.648 "busy": 44168059260, 00:27:44.648 "idle": 2531975084, 00:27:44.648 "in_interrupt": false, 00:27:44.648 "irq": 5, 00:27:44.648 "sys": 4, 00:27:44.648 "usr": 2045, 00:27:44.648 "core_freq": 2300, 00:27:44.648 "lw_threads": [ 00:27:44.648 { 00:27:44.648 "name": "thread2", 00:27:44.648 "id": 2, 00:27:44.648 "cpumask": "4", 00:27:44.648 "elapsed": 44137299246 00:27:44.648 } 00:27:44.648 ] 00:27:44.648 } 00:27:44.648 { 00:27:44.648 "lcore": 3, 00:27:44.648 "tid": 3985955, 00:27:44.648 "busy": 21393576814, 00:27:44.648 "idle": 2769275874, 00:27:44.648 "in_interrupt": false, 00:27:44.648 "irq": 2, 00:27:44.648 "sys": 8, 00:27:44.648 "usr": 1058, 00:27:44.648 "core_freq": 2300, 00:27:44.648 "lw_threads": [ 00:27:44.648 { 00:27:44.648 "name": "thread3", 00:27:44.648 "id": 3, 00:27:44.648 "cpumask": "8", 00:27:44.648 "elapsed": 21132680282 00:27:44.648 } 00:27:44.648 ] 00:27:44.648 } 00:27:44.648 { 00:27:44.648 "lcore": 4, 00:27:44.648 "tid": 3985956, 00:27:44.648 "busy": 0, 00:27:44.648 "idle": 1850248226, 00:27:44.648 "in_interrupt": true, 00:27:44.648 "irq": 3, 00:27:44.648 "sys": 27, 00:27:44.648 "usr": 134, 00:27:44.648 "core_freq": 1000, 00:27:44.648 "lw_threads": [] 00:27:44.648 } 00:27:44.648 { 00:27:44.648 "lcore": 37, 00:27:44.648 "tid": 3985957, 00:27:44.648 "busy": 0, 00:27:44.648 "idle": 1850591784, 00:27:44.648 "in_interrupt": true, 00:27:44.648 "irq": 0, 00:27:44.648 "sys": 2, 00:27:44.648 "usr": 86, 00:27:44.648 "core_freq": 1000, 00:27:44.648 "lw_threads": [] 00:27:44.648 } 00:27:44.648 { 00:27:44.648 "lcore": 38, 00:27:44.648 "tid": 3985958, 00:27:44.648 "busy": 0, 00:27:44.648 "idle": 1844701424, 00:27:44.648 "in_interrupt": true, 00:27:44.648 "irq": 2, 00:27:44.648 "sys": 25, 00:27:44.648 "usr": 143, 00:27:44.648 "core_freq": 1000, 00:27:44.648 "lw_threads": [] 00:27:44.648 } 00:27:44.648 { 00:27:44.648 "lcore": 39, 00:27:44.648 "tid": 3985959, 00:27:44.648 "busy": 0, 00:27:44.648 "idle": 1860428504, 00:27:44.648 "in_interrupt": true, 00:27:44.648 "irq": 1, 00:27:44.648 "sys": 14, 00:27:44.648 "usr": 117, 00:27:44.648 "core_freq": 1000, 00:27:44.648 "lw_threads": [] 00:27:44.648 } 00:27:44.648 { 00:27:44.648 "lcore": 40, 00:27:44.648 "tid": 3985960, 00:27:44.648 "busy": 0, 00:27:44.648 "idle": 1860885264, 00:27:44.648 "in_interrupt": true, 00:27:44.648 "irq": 2, 00:27:44.648 "sys": 9, 00:27:44.648 "usr": 90, 00:27:44.648 "core_freq": 1000, 00:27:44.648 "lw_threads": [] 00:27:44.648 }' 00:27:44.648 13:59:15 scheduler.interrupt_mode -- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 3) | .lw_threads[] | select(.name == "thread3")' 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@57 -- # [[ -n { 00:27:44.648 "name": "thread3", 00:27:44.648 "id": 3, 00:27:44.648 "cpumask": "8", 00:27:44.648 "elapsed": 21132680282 00:27:44.648 } ]] 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 
)) 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}" 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # mask_cpus 4 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@172 -- # fold_array_onto_string 4 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@27 -- # cpus=('4') 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@27 -- # local cpus 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@29 -- # local IFS=, 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@30 -- # echo 4 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@172 -- # printf '[%s]\n' 4 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # create_thread -n thread4 -m '[4]' -a 100 00:27:44.648 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread4 -m '[4]' -a 100 00:27:44.648 13:59:16 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.648 13:59:16 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:44.906 13:59:16 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # threads[cpu]=4 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu") 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/interrupt.sh@55 -- # collect_cpu_idle 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@643 -- # (( 1 > 0 )) 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@645 -- # local time=5 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@646 -- # local cpu 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@647 -- # local samples 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # is_idle=() 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # local -g is_idle 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@650 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5 00:27:44.906 Collecting cpu idle stats (cpus: 4) for 5 seconds... 
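The per-cpu user-usage percentages reported in these blocks come from deltas of the raw /proc/stat tick counters: the newest user sample minus the previous one, related to CLK_TCK (ticks per second) and capped at 100, which is why raw deltas of 101-102 ticks show up as 100% above. A rough sketch of that arithmetic using the cpu2 samples from the log (an approximation of cpu_usage_clk_tck, not the helper itself):

    samples=(859223 859308 859410 859511 859613)   # cpu2 user ticks over the 5 s window, from the log above
    clk_tck=$(getconf CLK_TCK)                      # ticks per second, typically 100
    delta=$(( samples[-1] - samples[-2] ))          # user ticks burned in the last interval (102 here)
    usage=$(( delta >= clk_tck ? 100 : delta * 100 / clk_tck ))
    printf '* cpu2 user usage: %u\n' "$usage"       # prints 100, matching the trace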
00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@653 -- # get_cpu_time 5 idle 0 1 4 00:27:44.906 13:59:16 scheduler.interrupt_mode -- scheduler/common.sh@500 -- # xtrace_disable 00:27:44.906 13:59:16 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@655 -- # local user_load load_median user_spdk_load 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 0 0 0 0 0 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('0' '0' '0' '0' '0') 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 0 0 0 0 0 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=0 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 0 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=0 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '0 0 0 0 0' 0 0 00:27:51.460 * cpu4 idle samples: 0 0 0 0 0 (avg: 0%, median: 0%) 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 4 user 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=4 time=user 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_4 ]] 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_4 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:27:51.460 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 101 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=101 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=100 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 100 00:27:51.461 13:59:22 
scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 4 user 100 00:27:51.461 * cpu4 user usage: 100 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 4 '489445 489545 489646 489746 489847' 00:27:51.461 * cpu4 user samples: 489445 489545 489646 489746 489847 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 4 '309 309 309 309 309' 00:27:51.461 * cpu4 nice samples: 309 309 309 309 309 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 4 '108063 108063 108063 108063 108063' 00:27:51.461 * cpu4 system samples: 108063 108063 108063 108063 108063 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=100 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@673 -- # (( user_load <= 15 )) 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@677 -- # printf '* cpu%u is not idle\n' 4 00:27:51.461 * cpu4 is not idle 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@678 -- # is_idle[cpu]=0 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # get_spdk_proc_time 5 4 00:27:51.461 13:59:22 scheduler.interrupt_mode -- scheduler/common.sh@764 -- # xtrace_disable 00:27:51.461 13:59:22 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:55.649 stime samples: 0 0 0 0 00:27:55.649 utime samples: 0 100 100 100 00:27:55.649 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@683 -- # user_spdk_load=100 00:27:55.649 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@684 -- # (( user_spdk_load <= 15 )) 00:27:55.649 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors 00:27:55.649 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]' 00:27:55.649 13:59:26 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.649 13:59:26 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:55.649 13:59:26 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.649 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@56 -- # reactor_framework='{ 00:27:55.649 "lcore": 1, 00:27:55.649 "tid": 3985901, 00:27:55.649 "busy": 4078075032, 00:27:55.649 "idle": 96863030034, 00:27:55.649 "in_interrupt": false, 00:27:55.649 "irq": 15, 00:27:55.649 "sys": 22, 00:27:55.649 "usr": 4356, 00:27:55.649 "core_freq": 2300, 00:27:55.649 "lw_threads": [ 00:27:55.649 { 00:27:55.649 "name": "app_thread", 00:27:55.649 "id": 1, 00:27:55.649 "cpumask": "2", 00:27:55.649 "elapsed": 100947298292 00:27:55.649 } 00:27:55.649 ] 00:27:55.649 } 00:27:55.649 { 00:27:55.649 "lcore": 2, 00:27:55.649 "tid": 3985954, 00:27:55.649 "busy": 68322696562, 00:27:55.649 "idle": 2531975084, 00:27:55.649 "in_interrupt": false, 00:27:55.649 "irq": 8, 00:27:55.649 "sys": 4, 00:27:55.649 "usr": 3095, 00:27:55.649 "core_freq": 2300, 00:27:55.649 "lw_threads": [ 00:27:55.649 { 00:27:55.649 "name": "thread2", 00:27:55.649 "id": 2, 00:27:55.649 "cpumask": "4", 00:27:55.649 "elapsed": 68271708826 00:27:55.649 } 00:27:55.649 ] 00:27:55.649 } 00:27:55.649 { 00:27:55.649 "lcore": 3, 00:27:55.649 "tid": 3985955, 00:27:55.649 "busy": 45548487974, 
00:27:55.649 "idle": 2769275874, 00:27:55.649 "in_interrupt": false, 00:27:55.649 "irq": 6, 00:27:55.649 "sys": 8, 00:27:55.649 "usr": 2109, 00:27:55.649 "core_freq": 2300, 00:27:55.649 "lw_threads": [ 00:27:55.649 { 00:27:55.649 "name": "thread3", 00:27:55.649 "id": 3, 00:27:55.649 "cpumask": "8", 00:27:55.649 "elapsed": 45267089862 00:27:55.649 } 00:27:55.649 ] 00:27:55.649 } 00:27:55.649 { 00:27:55.649 "lcore": 4, 00:27:55.649 "tid": 3985956, 00:27:55.649 "busy": 22773968956, 00:27:55.649 "idle": 2769589192, 00:27:55.649 "in_interrupt": false, 00:27:55.649 "irq": 5, 00:27:55.649 "sys": 29, 00:27:55.649 "usr": 1173, 00:27:55.649 "core_freq": 2300, 00:27:55.649 "lw_threads": [ 00:27:55.649 { 00:27:55.649 "name": "thread4", 00:27:55.649 "id": 4, 00:27:55.649 "cpumask": "10", 00:27:55.649 "elapsed": 22262504832 00:27:55.649 } 00:27:55.649 ] 00:27:55.649 } 00:27:55.649 { 00:27:55.649 "lcore": 37, 00:27:55.649 "tid": 3985957, 00:27:55.649 "busy": 0, 00:27:55.649 "idle": 1850591784, 00:27:55.649 "in_interrupt": true, 00:27:55.649 "irq": 0, 00:27:55.649 "sys": 5, 00:27:55.649 "usr": 87, 00:27:55.649 "core_freq": 1000, 00:27:55.649 "lw_threads": [] 00:27:55.649 } 00:27:55.649 { 00:27:55.649 "lcore": 38, 00:27:55.649 "tid": 3985958, 00:27:55.649 "busy": 0, 00:27:55.649 "idle": 1844701424, 00:27:55.649 "in_interrupt": true, 00:27:55.649 "irq": 2, 00:27:55.649 "sys": 27, 00:27:55.649 "usr": 147, 00:27:55.649 "core_freq": 1000, 00:27:55.649 "lw_threads": [] 00:27:55.649 } 00:27:55.649 { 00:27:55.649 "lcore": 39, 00:27:55.649 "tid": 3985959, 00:27:55.650 "busy": 0, 00:27:55.650 "idle": 1860428504, 00:27:55.650 "in_interrupt": true, 00:27:55.650 "irq": 1, 00:27:55.650 "sys": 15, 00:27:55.650 "usr": 119, 00:27:55.650 "core_freq": 1000, 00:27:55.650 "lw_threads": [] 00:27:55.650 } 00:27:55.650 { 00:27:55.650 "lcore": 40, 00:27:55.650 "tid": 3985960, 00:27:55.650 "busy": 0, 00:27:55.650 "idle": 1860885264, 00:27:55.650 "in_interrupt": true, 00:27:55.650 "irq": 2, 00:27:55.650 "sys": 12, 00:27:55.650 "usr": 91, 00:27:55.650 "core_freq": 1000, 00:27:55.650 "lw_threads": [] 00:27:55.650 }' 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 4) | .lw_threads[] | select(.name == "thread4")' 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@57 -- # [[ -n { 00:27:55.650 "name": "thread4", 00:27:55.650 "id": 4, 00:27:55.650 "cpumask": "10", 00:27:55.650 "elapsed": 22262504832 00:27:55.650 } ]] 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 )) 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}" 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@64 -- # active_thread 2 0 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 2 0 00:27:55.650 13:59:26 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.650 13:59:26 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:55.650 13:59:26 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu") 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/interrupt.sh@66 -- # collect_cpu_idle 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@643 -- # (( 1 > 0 )) 00:27:55.650 13:59:26 
scheduler.interrupt_mode -- scheduler/common.sh@645 -- # local time=5 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@646 -- # local cpu 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@647 -- # local samples 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # is_idle=() 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # local -g is_idle 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@650 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5 00:27:55.650 Collecting cpu idle stats (cpus: 2) for 5 seconds... 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@653 -- # get_cpu_time 5 idle 0 1 2 00:27:55.650 13:59:26 scheduler.interrupt_mode -- scheduler/common.sh@500 -- # xtrace_disable 00:27:55.650 13:59:26 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@655 -- # local user_load load_median user_spdk_load 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 0 0 31 100 100 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('0' '0' '31' '100' '100') 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 0 0 31 100 100 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=31 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 31 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=31 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '0 0 31 100 100' 46 31 00:28:02.215 * cpu2 idle samples: 0 0 31 100 100 (avg: 46%, median: 31%) 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 2 user 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=2 time=user 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_2 ]] 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_2 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:28:02.215 
13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 2 user 0 00:28:02.215 * cpu2 user usage: 0 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 2 '862356 862456 862525 862525 862525' 00:28:02.215 * cpu2 user samples: 862356 862456 862525 862525 862525 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 2 '2033 2033 2033 2033 2033' 00:28:02.215 * cpu2 nice samples: 2033 2033 2033 2033 2033 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 2 '124541 124541 124542 124542 124542' 00:28:02.215 * cpu2 system samples: 124541 124541 124542 124542 124542 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 2 00:28:02.215 * cpu2 is idle 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]' 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors 00:28:02.215 13:59:32 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.215 13:59:32 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:02.215 13:59:32 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.215 13:59:32 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # reactor_framework='{ 00:28:02.215 "lcore": 1, 00:28:02.215 "tid": 3985901, 00:28:02.215 "busy": 4096180586, 00:28:02.215 "idle": 111606165718, 00:28:02.215 "in_interrupt": false, 00:28:02.215 "irq": 17, 00:28:02.215 "sys": 23, 00:28:02.215 "usr": 4998, 00:28:02.215 "core_freq": 2300, 00:28:02.215 "lw_threads": [ 00:28:02.215 { 00:28:02.215 "name": "app_thread", 00:28:02.215 "id": 1, 00:28:02.215 "cpumask": "2", 00:28:02.215 "elapsed": 115708519024 00:28:02.215 }, 00:28:02.215 { 00:28:02.215 "name": "thread2", 00:28:02.215 "id": 2, 00:28:02.215 "cpumask": "4", 00:28:02.215 "elapsed": 10338012340 00:28:02.215 } 00:28:02.215 ] 00:28:02.215 } 00:28:02.215 { 00:28:02.215 "lcore": 2, 00:28:02.215 "tid": 3985954, 00:28:02.215 "busy": 69013748764, 00:28:02.215 "idle": 8743501512, 00:28:02.215 "in_interrupt": true, 00:28:02.215 "irq": 9, 00:28:02.215 "sys": 7, 00:28:02.215 "usr": 3398, 00:28:02.215 "core_freq": 1000, 00:28:02.215 "lw_threads": [] 00:28:02.215 } 00:28:02.215 { 00:28:02.215 "lcore": 3, 00:28:02.215 "tid": 3985955, 00:28:02.215 "busy": 60041432994, 00:28:02.215 
"idle": 2769275874, 00:28:02.215 "in_interrupt": false, 00:28:02.215 "irq": 7, 00:28:02.215 "sys": 8, 00:28:02.215 "usr": 2739, 00:28:02.215 "core_freq": 2300, 00:28:02.215 "lw_threads": [ 00:28:02.215 { 00:28:02.215 "name": "thread3", 00:28:02.215 "id": 3, 00:28:02.215 "cpumask": "8", 00:28:02.215 "elapsed": 60028310594 00:28:02.215 } 00:28:02.215 ] 00:28:02.215 } 00:28:02.215 { 00:28:02.215 "lcore": 4, 00:28:02.215 "tid": 3985956, 00:28:02.215 "busy": 37267111350, 00:28:02.215 "idle": 2769589192, 00:28:02.215 "in_interrupt": false, 00:28:02.215 "irq": 7, 00:28:02.215 "sys": 29, 00:28:02.215 "usr": 1803, 00:28:02.215 "core_freq": 2300, 00:28:02.215 "lw_threads": [ 00:28:02.215 { 00:28:02.215 "name": "thread4", 00:28:02.215 "id": 4, 00:28:02.215 "cpumask": "10", 00:28:02.215 "elapsed": 37023725564 00:28:02.215 } 00:28:02.215 ] 00:28:02.215 } 00:28:02.215 { 00:28:02.215 "lcore": 37, 00:28:02.215 "tid": 3985957, 00:28:02.215 "busy": 0, 00:28:02.215 "idle": 1850591784, 00:28:02.215 "in_interrupt": true, 00:28:02.215 "irq": 0, 00:28:02.215 "sys": 6, 00:28:02.215 "usr": 88, 00:28:02.215 "core_freq": 1000, 00:28:02.215 "lw_threads": [] 00:28:02.215 } 00:28:02.215 { 00:28:02.215 "lcore": 38, 00:28:02.215 "tid": 3985958, 00:28:02.215 "busy": 0, 00:28:02.215 "idle": 1844701424, 00:28:02.215 "in_interrupt": true, 00:28:02.215 "irq": 2, 00:28:02.215 "sys": 30, 00:28:02.215 "usr": 149, 00:28:02.215 "core_freq": 1000, 00:28:02.215 "lw_threads": [] 00:28:02.215 } 00:28:02.215 { 00:28:02.215 "lcore": 39, 00:28:02.215 "tid": 3985959, 00:28:02.215 "busy": 0, 00:28:02.215 "idle": 1860428504, 00:28:02.215 "in_interrupt": true, 00:28:02.216 "irq": 1, 00:28:02.216 "sys": 17, 00:28:02.216 "usr": 123, 00:28:02.216 "core_freq": 1000, 00:28:02.216 "lw_threads": [] 00:28:02.216 } 00:28:02.216 { 00:28:02.216 "lcore": 40, 00:28:02.216 "tid": 3985960, 00:28:02.216 "busy": 0, 00:28:02.216 "idle": 1860885264, 00:28:02.216 "in_interrupt": true, 00:28:02.216 "irq": 2, 00:28:02.216 "sys": 13, 00:28:02.216 "usr": 95, 00:28:02.216 "core_freq": 1000, 00:28:02.216 "lw_threads": [] 00:28:02.216 }' 00:28:02.216 13:59:32 scheduler.interrupt_mode -- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 2) | .lw_threads[].id' 00:28:02.216 13:59:32 scheduler.interrupt_mode -- scheduler/interrupt.sh@68 -- # [[ -z '' ]] 00:28:02.216 13:59:32 scheduler.interrupt_mode -- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread2")' 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/interrupt.sh@69 -- # [[ -n { 00:28:02.216 "name": "thread2", 00:28:02.216 "id": 2, 00:28:02.216 "cpumask": "4", 00:28:02.216 "elapsed": 10338012340 00:28:02.216 } ]] 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 )) 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}" 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/interrupt.sh@64 -- # active_thread 3 0 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 3 0 00:28:02.216 13:59:33 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.216 13:59:33 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:02.216 13:59:33 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu") 
00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/interrupt.sh@66 -- # collect_cpu_idle 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@643 -- # (( 1 > 0 )) 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@645 -- # local time=5 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@646 -- # local cpu 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@647 -- # local samples 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # is_idle=() 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # local -g is_idle 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@650 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5 00:28:02.216 Collecting cpu idle stats (cpus: 3) for 5 seconds... 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@653 -- # get_cpu_time 5 idle 0 1 3 00:28:02.216 13:59:33 scheduler.interrupt_mode -- scheduler/common.sh@500 -- # xtrace_disable 00:28:02.216 13:59:33 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@655 -- # local user_load load_median user_spdk_load 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 0 0 72 100 100 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('0' '0' '72' '100' '100') 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 0 0 72 100 100 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=72 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 72 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=72 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '0 0 72 100 100' 54 72 00:28:08.777 * cpu3 idle samples: 0 0 72 100 100 (avg: 54%, median: 72%) 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 3 user 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=3 time=user 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v raw_samples_3 ]] 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_3 00:28:08.777 13:59:39 scheduler.interrupt_mode -- 
scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 3 user 0 00:28:08.777 * cpu3 user usage: 0 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 3 '731325 731426 731453 731453 731453' 00:28:08.777 * cpu3 user samples: 731325 731426 731453 731453 731453 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 3 '509 509 509 509 509' 00:28:08.777 * cpu3 nice samples: 509 509 509 509 509 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 3 '96657 96657 96658 96658 96658' 00:28:08.777 * cpu3 system samples: 96657 96657 96658 96658 96658 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 3 00:28:08.777 * cpu3 is idle 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]' 00:28:08.777 13:59:39 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.777 13:59:39 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:08.777 13:59:39 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.777 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # reactor_framework='{ 00:28:08.777 "lcore": 1, 00:28:08.777 "tid": 3985901, 00:28:08.777 "busy": 4113982646, 00:28:08.777 "idle": 126343871406, 00:28:08.777 "in_interrupt": false, 00:28:08.777 "irq": 19, 00:28:08.777 "sys": 23, 00:28:08.777 "usr": 5638, 00:28:08.777 "core_freq": 2300, 00:28:08.777 "lw_threads": [ 00:28:08.777 { 00:28:08.777 "name": "app_thread", 00:28:08.777 "id": 1, 00:28:08.777 "cpumask": "2", 00:28:08.777 "elapsed": 130463999188 00:28:08.777 }, 00:28:08.777 { 00:28:08.777 "name": "thread2", 00:28:08.777 "id": 2, 00:28:08.777 "cpumask": "4", 00:28:08.777 "elapsed": 25093492504 00:28:08.777 }, 00:28:08.777 { 00:28:08.777 "name": "thread3", 00:28:08.777 "id": 3, 00:28:08.777 "cpumask": "8", 00:28:08.777 "elapsed": 11290503598 00:28:08.777 } 00:28:08.777 ] 00:28:08.777 } 00:28:08.777 { 00:28:08.777 "lcore": 2, 00:28:08.777 "tid": 3985954, 00:28:08.777 "busy": 
69013748764, 00:28:08.777 "idle": 8743501512, 00:28:08.777 "in_interrupt": true, 00:28:08.777 "irq": 9, 00:28:08.777 "sys": 11, 00:28:08.777 "usr": 3413, 00:28:08.777 "core_freq": 1000, 00:28:08.777 "lw_threads": [] 00:28:08.777 } 00:28:08.777 { 00:28:08.777 "lcore": 3, 00:28:08.777 "tid": 3985955, 00:28:08.777 "busy": 60732358358, 00:28:08.777 "idle": 8060646312, 00:28:08.777 "in_interrupt": true, 00:28:08.777 "irq": 8, 00:28:08.777 "sys": 10, 00:28:08.777 "usr": 3000, 00:28:08.777 "core_freq": 1000, 00:28:08.777 "lw_threads": [] 00:28:08.777 } 00:28:08.777 { 00:28:08.777 "lcore": 4, 00:28:08.777 "tid": 3985956, 00:28:08.777 "busy": 51990049802, 00:28:08.777 "idle": 2769589192, 00:28:08.777 "in_interrupt": false, 00:28:08.777 "irq": 8, 00:28:08.777 "sys": 29, 00:28:08.777 "usr": 2443, 00:28:08.777 "core_freq": 2300, 00:28:08.777 "lw_threads": [ 00:28:08.777 { 00:28:08.777 "name": "thread4", 00:28:08.777 "id": 4, 00:28:08.777 "cpumask": "10", 00:28:08.777 "elapsed": 51779205728 00:28:08.777 } 00:28:08.777 ] 00:28:08.777 } 00:28:08.777 { 00:28:08.777 "lcore": 37, 00:28:08.777 "tid": 3985957, 00:28:08.777 "busy": 0, 00:28:08.777 "idle": 1850591784, 00:28:08.777 "in_interrupt": true, 00:28:08.777 "irq": 0, 00:28:08.777 "sys": 6, 00:28:08.777 "usr": 88, 00:28:08.777 "core_freq": 1000, 00:28:08.777 "lw_threads": [] 00:28:08.777 } 00:28:08.777 { 00:28:08.777 "lcore": 38, 00:28:08.777 "tid": 3985958, 00:28:08.777 "busy": 0, 00:28:08.777 "idle": 1844701424, 00:28:08.777 "in_interrupt": true, 00:28:08.777 "irq": 3, 00:28:08.777 "sys": 34, 00:28:08.777 "usr": 157, 00:28:08.777 "core_freq": 1000, 00:28:08.777 "lw_threads": [] 00:28:08.777 } 00:28:08.777 { 00:28:08.777 "lcore": 39, 00:28:08.777 "tid": 3985959, 00:28:08.777 "busy": 0, 00:28:08.777 "idle": 1860428504, 00:28:08.777 "in_interrupt": true, 00:28:08.777 "irq": 1, 00:28:08.777 "sys": 18, 00:28:08.777 "usr": 124, 00:28:08.777 "core_freq": 1000, 00:28:08.777 "lw_threads": [] 00:28:08.777 } 00:28:08.777 { 00:28:08.777 "lcore": 40, 00:28:08.777 "tid": 3985960, 00:28:08.777 "busy": 0, 00:28:08.778 "idle": 1860885264, 00:28:08.778 "in_interrupt": true, 00:28:08.778 "irq": 2, 00:28:08.778 "sys": 13, 00:28:08.778 "usr": 96, 00:28:08.778 "core_freq": 1000, 00:28:08.778 "lw_threads": [] 00:28:08.778 }' 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id' 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@68 -- # [[ -z '' ]] 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread3")' 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@69 -- # [[ -n { 00:28:08.778 "name": "thread3", 00:28:08.778 "id": 3, 00:28:08.778 "cpumask": "8", 00:28:08.778 "elapsed": 11290503598 00:28:08.778 } ]] 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 )) 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}" 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@64 -- # active_thread 4 0 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 4 0 00:28:08.778 13:59:39 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.778 13:59:39 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:08.778 13:59:39 
scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu") 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/interrupt.sh@66 -- # collect_cpu_idle 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@643 -- # (( 1 > 0 )) 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@645 -- # local time=5 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@646 -- # local cpu 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@647 -- # local samples 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # is_idle=() 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@648 -- # local -g is_idle 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@650 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5 00:28:08.778 Collecting cpu idle stats (cpus: 4) for 5 seconds... 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@653 -- # get_cpu_time 5 idle 0 1 4 00:28:08.778 13:59:39 scheduler.interrupt_mode -- scheduler/common.sh@500 -- # xtrace_disable 00:28:08.778 13:59:39 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@655 -- # local user_load load_median user_spdk_load 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@656 -- # for cpu in "${cpus_to_collect[@]}" 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@657 -- # samples=(${cpu_times[cpu]}) 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # calc_median 0 0 24 100 100 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # samples=('0' '0' '24' '100' '100') 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@744 -- # local samples samples_sorted 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@745 -- # local middle median sample 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n)) 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # printf '%s\n' 0 0 24 100 100 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@747 -- # sort -n 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@749 -- # middle=2 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@750 -- # (( 5 % 2 == 0 )) 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@753 -- # median=24 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@756 -- # echo 24 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@658 -- # load_median=24 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@659 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '0 0 24 100 100' 44 24 00:28:14.186 * cpu4 idle samples: 0 0 24 100 100 (avg: 44%, median: 24%) 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # cpu_usage_clk_tck 4 user 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@695 -- # local cpu=4 time=user 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@696 -- # local user nice system usage clk_delta 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@699 -- # [[ -v 
raw_samples_4 ]] 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@701 -- # local -n raw_samples=raw_samples_4 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@702 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@703 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@704 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@707 -- # case "$time" in 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@708 -- # : 0 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # getconf CLK_TCK 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@714 -- # usage=0 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@715 -- # usage=0 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@717 -- # printf %u 0 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@718 -- # printf '* cpu%u %s usage: %u\n' 4 user 0 00:28:14.186 * cpu4 user usage: 0 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@719 -- # printf '* cpu%u user samples: %s\n' 4 '491766 491867 491943 491943 491943' 00:28:14.186 * cpu4 user samples: 491766 491867 491943 491943 491943 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@720 -- # printf '* cpu%u nice samples: %s\n' 4 '309 309 309 309 309' 00:28:14.186 * cpu4 nice samples: 309 309 309 309 309 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@721 -- # printf '* cpu%u system samples: %s\n' 4 '108063 108063 108063 108063 108063' 00:28:14.186 * cpu4 system samples: 108063 108063 108063 108063 108063 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@669 -- # user_load=0 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@670 -- # (( samples[-1] >= 70 )) 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@671 -- # printf '* cpu%u is idle\n' 4 00:28:14.186 * cpu4 is idle 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@672 -- # is_idle[cpu]=1 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors 00:28:14.186 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]' 00:28:14.186 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.186 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.186 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.445 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@67 -- # reactor_framework='{ 00:28:14.445 "lcore": 1, 00:28:14.445 "tid": 3985901, 00:28:14.445 "busy": 4138680984, 00:28:14.445 "idle": 141090941354, 00:28:14.445 "in_interrupt": false, 00:28:14.445 "irq": 21, 00:28:14.445 "sys": 24, 00:28:14.445 "usr": 6279, 00:28:14.445 "core_freq": 1900, 00:28:14.445 "lw_threads": [ 00:28:14.445 { 00:28:14.445 "name": "app_thread", 00:28:14.445 "id": 1, 00:28:14.445 "cpumask": "2", 00:28:14.445 "elapsed": 145235700170 00:28:14.445 }, 00:28:14.445 { 00:28:14.445 "name": "thread2", 00:28:14.445 "id": 2, 00:28:14.445 "cpumask": "4", 00:28:14.445 "elapsed": 39865193486 00:28:14.445 }, 00:28:14.445 { 00:28:14.445 "name": "thread3", 00:28:14.445 "id": 3, 00:28:14.445 
"cpumask": "8", 00:28:14.445 "elapsed": 26062204580 00:28:14.445 }, 00:28:14.445 { 00:28:14.445 "name": "thread4", 00:28:14.445 "id": 4, 00:28:14.445 "cpumask": "10", 00:28:14.445 "elapsed": 9979852970 00:28:14.445 } 00:28:14.445 ] 00:28:14.445 } 00:28:14.445 { 00:28:14.445 "lcore": 2, 00:28:14.445 "tid": 3985954, 00:28:14.445 "busy": 69013748764, 00:28:14.445 "idle": 8743501512, 00:28:14.445 "in_interrupt": true, 00:28:14.445 "irq": 9, 00:28:14.445 "sys": 16, 00:28:14.445 "usr": 3423, 00:28:14.445 "core_freq": 1000, 00:28:14.445 "lw_threads": [] 00:28:14.445 } 00:28:14.445 { 00:28:14.445 "lcore": 3, 00:28:14.445 "tid": 3985955, 00:28:14.445 "busy": 60732358358, 00:28:14.445 "idle": 8060646312, 00:28:14.445 "in_interrupt": true, 00:28:14.445 "irq": 8, 00:28:14.445 "sys": 13, 00:28:14.445 "usr": 3009, 00:28:14.445 "core_freq": 1000, 00:28:14.445 "lw_threads": [] 00:28:14.445 } 00:28:14.445 { 00:28:14.445 "lcore": 4, 00:28:14.445 "tid": 3985956, 00:28:14.445 "busy": 52450726762, 00:28:14.445 "idle": 9190600262, 00:28:14.445 "in_interrupt": true, 00:28:14.445 "irq": 9, 00:28:14.445 "sys": 30, 00:28:14.445 "usr": 2743, 00:28:14.445 "core_freq": 1000, 00:28:14.445 "lw_threads": [] 00:28:14.445 } 00:28:14.446 { 00:28:14.446 "lcore": 37, 00:28:14.446 "tid": 3985957, 00:28:14.446 "busy": 0, 00:28:14.446 "idle": 1850591784, 00:28:14.446 "in_interrupt": true, 00:28:14.446 "irq": 0, 00:28:14.446 "sys": 6, 00:28:14.446 "usr": 88, 00:28:14.446 "core_freq": 1000, 00:28:14.446 "lw_threads": [] 00:28:14.446 } 00:28:14.446 { 00:28:14.446 "lcore": 38, 00:28:14.446 "tid": 3985958, 00:28:14.446 "busy": 0, 00:28:14.446 "idle": 1844701424, 00:28:14.446 "in_interrupt": true, 00:28:14.446 "irq": 3, 00:28:14.446 "sys": 38, 00:28:14.446 "usr": 160, 00:28:14.446 "core_freq": 1000, 00:28:14.446 "lw_threads": [] 00:28:14.446 } 00:28:14.446 { 00:28:14.446 "lcore": 39, 00:28:14.446 "tid": 3985959, 00:28:14.446 "busy": 0, 00:28:14.446 "idle": 1860428504, 00:28:14.446 "in_interrupt": true, 00:28:14.446 "irq": 2, 00:28:14.446 "sys": 23, 00:28:14.446 "usr": 126, 00:28:14.446 "core_freq": 1000, 00:28:14.446 "lw_threads": [] 00:28:14.446 } 00:28:14.446 { 00:28:14.446 "lcore": 40, 00:28:14.446 "tid": 3985960, 00:28:14.446 "busy": 0, 00:28:14.446 "idle": 1860885264, 00:28:14.446 "in_interrupt": true, 00:28:14.446 "irq": 2, 00:28:14.446 "sys": 16, 00:28:14.446 "usr": 101, 00:28:14.446 "core_freq": 1000, 00:28:14.446 "lw_threads": [] 00:28:14.446 }' 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id' 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@68 -- # [[ -z '' ]] 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread4")' 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@69 -- # [[ -n { 00:28:14.446 "name": "thread4", 00:28:14.446 "id": 4, 00:28:14.446 "cpumask": "10", 00:28:14.446 "elapsed": 9979852970 00:28:14.446 } ]] 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 )) 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}" 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@74 -- # destroy_thread 2 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@492 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 2 00:28:14.446 13:59:45 scheduler.interrupt_mode -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}" 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@74 -- # destroy_thread 3 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@492 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 3 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}" 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@74 -- # destroy_thread 4 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/common.sh@492 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 4 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.446 13:59:45 scheduler.interrupt_mode -- scheduler/interrupt.sh@1 -- # killprocess 3985901 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@954 -- # '[' -z 3985901 ']' 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@958 -- # kill -0 3985901 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@959 -- # uname 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.446 13:59:45 scheduler.interrupt_mode -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3985901 00:28:14.704 13:59:46 scheduler.interrupt_mode -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:14.704 13:59:46 scheduler.interrupt_mode -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:14.704 13:59:46 scheduler.interrupt_mode -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3985901' 00:28:14.704 killing process with pid 3985901 00:28:14.704 13:59:46 scheduler.interrupt_mode -- common/autotest_common.sh@973 -- # kill 3985901 00:28:14.704 [2024-12-05 13:59:46.003281] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
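Taken together, the per-cpu blocks above all follow the same pattern: collect five one-second idle samples, take their median with calc_median, and report the core idle once the most recent sample is at or above 70% idle. What follows is a minimal standalone sketch of that check; the sample values are illustrative stand-ins rather than a live /proc/stat read, and the helper name mirrors the one in scheduler/common.sh only for readability.

#!/usr/bin/env bash
# Sketch of the idle-detection check repeated in the trace above.
# Assumption: samples are hard-coded; the real test derives them from /proc/stat deltas.

calc_median() {
        # Sort the idle-percentage samples numerically and return the middle one,
        # matching the traced calc_median for an odd-length sample set.
        local -a sorted
        mapfile -t sorted < <(printf '%s\n' "$@" | sort -n)
        echo "${sorted[$(( $# / 2 ))]}"
}

samples=(0 0 72 100 100)                  # five 1-second idle samples for one cpu
load_median=$(calc_median "${samples[@]}")
printf '* idle samples: %s (median: %u%%)\n' "${samples[*]}" "$load_median"

# The core is reported idle when the newest sample shows at least 70% idle time.
if (( samples[-1] >= 70 )); then
        printf '* cpu is idle\n'
fi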
00:28:14.704 13:59:46 scheduler.interrupt_mode -- common/autotest_common.sh@978 -- # wait 3985901 00:28:14.963 00:28:14.963 real 1m4.396s 00:28:14.963 user 2m40.387s 00:28:14.963 sys 0m1.741s 00:28:14.963 13:59:46 scheduler.interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.963 13:59:46 scheduler.interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.963 ************************************ 00:28:14.963 END TEST interrupt_mode 00:28:14.963 ************************************ 00:28:14.963 13:59:46 scheduler -- scheduler/scheduler.sh@19 -- # run_test core_isolating /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/core_isolating.sh 00:28:14.963 13:59:46 scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:14.963 13:59:46 scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.963 13:59:46 scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:15.531 ************************************ 00:28:15.531 START TEST core_isolating 00:28:15.531 ************************************ 00:28:15.532 13:59:46 scheduler.core_isolating -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/core_isolating.sh 00:28:15.532 * Looking for test storage... 00:28:15.532 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:28:15.532 13:59:46 scheduler.core_isolating -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:15.532 13:59:46 scheduler.core_isolating -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:15.532 13:59:46 scheduler.core_isolating -- common/autotest_common.sh@1711 -- # lcov --version 00:28:15.532 13:59:46 scheduler.core_isolating -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@344 -- # case "$op" in 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@345 -- # : 1 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@365 -- # decimal 1 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@353 -- # local d=1 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@355 -- # echo 1 00:28:15.532 13:59:46 scheduler.core_isolating -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@366 -- # decimal 2 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@353 -- # local d=2 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@355 -- # echo 2 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.532 13:59:47 scheduler.core_isolating -- scripts/common.sh@368 -- # return 0 00:28:15.532 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.532 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:15.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.532 --rc genhtml_branch_coverage=1 00:28:15.532 --rc genhtml_function_coverage=1 00:28:15.532 --rc genhtml_legend=1 00:28:15.532 --rc geninfo_all_blocks=1 00:28:15.532 --rc geninfo_unexecuted_blocks=1 00:28:15.532 00:28:15.532 ' 00:28:15.532 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:15.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.532 --rc genhtml_branch_coverage=1 00:28:15.532 --rc genhtml_function_coverage=1 00:28:15.532 --rc genhtml_legend=1 00:28:15.532 --rc geninfo_all_blocks=1 00:28:15.532 --rc geninfo_unexecuted_blocks=1 00:28:15.532 00:28:15.532 ' 00:28:15.532 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:15.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.532 --rc genhtml_branch_coverage=1 00:28:15.532 --rc genhtml_function_coverage=1 00:28:15.532 --rc genhtml_legend=1 00:28:15.532 --rc geninfo_all_blocks=1 00:28:15.532 --rc geninfo_unexecuted_blocks=1 00:28:15.532 00:28:15.532 ' 00:28:15.532 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:15.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.532 --rc genhtml_branch_coverage=1 00:28:15.532 --rc genhtml_function_coverage=1 00:28:15.532 --rc genhtml_legend=1 00:28:15.532 --rc geninfo_all_blocks=1 00:28:15.532 --rc geninfo_unexecuted_blocks=1 00:28:15.532 00:28:15.532 ' 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@11 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:28:15.532 13:59:47 
scheduler.core_isolating -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/cgroups.sh@244 -- # check_cgroup 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/cgroups.sh@10 -- # echo 2 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@13 -- # trap 'killprocess "$spdk_pid"' EXIT 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@15 -- # parse_cpu_list /dev/fd/62 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@34 -- # local list=/dev/fd/62 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@35 -- # local elem elems cpus 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@38 -- # IFS=, 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@38 -- # read -ra elems 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@15 -- # echo 1,2,3,4,37,38,39,40 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@40 -- # (( 8 > 0 )) 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 1 == *-* ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=1 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 2 == *-* ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=2 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 3 == *-* ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=3 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 4 == *-* ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=4 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 37 == *-* ]] 00:28:15.532 13:59:47 scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=37 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 38 == *-* ]] 00:28:15.533 13:59:47 
scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=38 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 39 == *-* ]] 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=39 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@43 -- # [[ 40 == *-* ]] 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@49 -- # cpus[elem]=40 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@52 -- # printf '%u\n' 1 2 3 4 37 38 39 40 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@15 -- # fold_list_onto_array cpus 1 2 3 4 37 38 39 40 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@16 -- # local array=cpus 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@17 -- # local elem 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@19 -- # shift 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=1' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=1 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=2' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=2 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=3' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=3 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=4' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=4 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=37' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=37 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=38' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=38 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=39' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=39 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@21 -- # for elem in "$@" 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # eval 'cpus[elem]=40' 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@22 -- # cpus[elem]=40 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@17 -- # cpus=("${cpus[@]}") 
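The parse_cpu_list / fold_list_onto_array steps above simply turn the comma-separated list 1,2,3,4,37,38,39,40 into a bash array with one slot per core. The hex cpumask strings that later appear in the framework_get_reactors output (for example "2000000000" for thread37) are the corresponding bit masks. A short sketch follows; cpu_list_to_hex_mask is a hypothetical helper added here for illustration, not a function in scheduler/common.sh.

#!/usr/bin/env bash
# Parse the same comma-separated cpu list the trace feeds to parse_cpu_list.
# (The real helper also expands ranges such as "1-4"; plain elements suffice here.)
IFS=, read -ra cpus <<< '1,2,3,4,37,38,39,40'

# Hypothetical helper: OR together (1 << cpu) for every listed cpu, which is how
# the hex "cpumask" values shown in the reactor JSON are formed.
cpu_list_to_hex_mask() {
        local mask=0 cpu
        for cpu in "$@"; do
                (( mask |= 1 << cpu ))
        done
        printf '0x%x\n' "$mask"
}

cpu_list_to_hex_mask 37            # 0x2000000000, the cpumask reported for thread37
cpu_list_to_hex_mask "${cpus[@]}"  # combined mask covering all eight test cores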
00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@18 -- # isolated_core=2 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@19 -- # scheduling_core=3 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/core_isolating.sh@79 -- # exec_under_static_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@424 -- # [[ -e /proc//status ]] 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@428 -- # spdk_pid=3994691 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@427 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc 00:28:15.533 13:59:47 scheduler.core_isolating -- scheduler/common.sh@430 -- # waitforlisten 3994691 00:28:15.533 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@835 -- # '[' -z 3994691 ']' 00:28:15.533 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.533 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.533 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.533 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.533 13:59:47 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:15.792 [2024-12-05 13:59:47.088757] Starting SPDK v25.01-pre git sha1 62083ef48 / DPDK 24.03.0 initialization... 
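Because the scheduler app is launched with --wait-for-rpc, the rest of the test is driven entirely over the RPC socket; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py. The sketch below replays the same call sequence with plain rpc.py invocations. The socket path and the standalone invocation style are assumptions (the harness also arranges the plugin path so rpc.py can locate scheduler_plugin), but the method names and arguments are the ones that appear in the trace.

#!/usr/bin/env bash
RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock            # assumed default RPC socket

# 1. Before subsystem init: send scheduling work to core 3 and isolate core 2.
$RPC -s "$SOCK" scheduler_set_options --scheduling-core 3 -i '[2]'

# 2. Finish initialization (the app was started with --wait-for-rpc).
$RPC -s "$SOCK" framework_start_init

# 3. Create one idle SPDK thread per core through the test app's plugin RPCs.
for cpu in 1 2 3 4 37 38 39 40; do
        $RPC -s "$SOCK" --plugin scheduler_plugin scheduler_thread_create \
                -n "thread${cpu}" -m "[${cpu}]" -a 0
done

# 4. Switch to the dynamic scheduler and inspect where the threads end up.
$RPC -s "$SOCK" framework_set_scheduler dynamic
$RPC -s "$SOCK" framework_get_reactors | jq -r '.reactors[]'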
00:28:15.792 [2024-12-05 13:59:47.088847] [ DPDK EAL parameters: scheduler --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3994691 ] 00:28:15.793 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.793 [2024-12-05 13:59:47.227290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 8 00:28:16.066 [2024-12-05 13:59:47.325250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.066 [2024-12-05 13:59:47.325307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.066 [2024-12-05 13:59:47.325346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.066 [2024-12-05 13:59:47.325374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 37 00:28:16.066 [2024-12-05 13:59:47.325426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 38 00:28:16.066 [2024-12-05 13:59:47.325464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 39 00:28:16.066 [2024-12-05 13:59:47.325510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 40 00:28:16.066 [2024-12-05 13:59:47.325512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.634 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.634 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@868 -- # return 0 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@81 -- # set_scheduler_options 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@22 -- # local isolated_core_mask 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@24 -- # mask_cpus 2 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 2 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('2') 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 2 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 2 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@24 -- # isolated_core_mask='[2]' 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@25 -- # rpc_cmd scheduler_set_options --scheduling-core 3 -i '[2]' 00:28:16.634 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.634 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:16.634 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.634 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@83 -- # rpc_cmd framework_start_init 00:28:16.634 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.634 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.202 [2024-12-05 13:59:48.469388] 'OCF_Core' volume operations registered 00:28:17.202 [2024-12-05 13:59:48.469452] 'OCF_Cache' volume operations registered 00:28:17.202 [2024-12-05 13:59:48.477096] 'OCF Composite' volume operations registered 00:28:17.202 [2024-12-05 
13:59:48.484723] 'SPDK_block_device' volume operations registered 00:28:17.202 [2024-12-05 13:59:48.486411] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:28:17.202 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.202 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@85 -- # set_scheduler_and_check_thread_status 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@29 -- # local isolated_thread_count tmp_count total_thread_count=0 idle_thread_count 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@30 -- # local core_mask reactors 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 1 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 1 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('1') 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 1 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 1 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[1]' 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread1 -m '[1]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread1 -m '[1]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.203 2 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 2 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 2 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('2') 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 2 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 2 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[2]' 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread2 -m '[2]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread2 -m '[2]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.203 3 00:28:17.203 13:59:48 scheduler.core_isolating -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 3 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 3 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('3') 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 3 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 3 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[3]' 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread3 -m '[3]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread3 -m '[3]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.203 4 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 4 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 4 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('4') 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 4 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 4 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[4]' 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread4 -m '[4]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread4 -m '[4]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.203 5 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 37 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 37 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('37') 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.203 
13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 37 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 37 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[37]' 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread37 -m '[37]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread37 -m '[37]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.203 6 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 38 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 38 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('38') 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 38 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 38 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[38]' 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread38 -m '[38]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread38 -m '[38]' -a 0 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.203 7 00:28:17.203 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 39 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 39 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('39') 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 39 00:28:17.203 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 39 00:28:17.462 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[39]' 00:28:17.462 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread39 -m '[39]' -a 0 00:28:17.462 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread39 -m '[39]' -a 0 00:28:17.463 13:59:48 scheduler.core_isolating -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.463 8 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@32 -- # for cpu in "${cpus[@]}" 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # mask_cpus 40 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # fold_array_onto_string 40 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # cpus=('40') 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/common.sh@27 -- # local cpus 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/common.sh@29 -- # local IFS=, 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/common.sh@30 -- # echo 40 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/common.sh@172 -- # printf '[%s]\n' 40 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@33 -- # core_mask='[40]' 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@34 -- # create_thread -n thread40 -m '[40]' -a 0 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/common.sh@488 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread40 -m '[40]' -a 0 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.463 9 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@38 -- # jq -r '.reactors[]' 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@38 -- # rpc_cmd framework_get_reactors 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.463 13:59:48 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@38 -- # reactors='{ 00:28:17.463 "lcore": 1, 00:28:17.463 "tid": 3994691, 00:28:17.463 "busy": 926635278, 00:28:17.463 "idle": 2408737184, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 00:28:17.463 "sys": 24, 00:28:17.463 "usr": 120, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "app_thread", 00:28:17.463 "id": 1, 00:28:17.463 "cpumask": "2", 00:28:17.463 "elapsed": 3349615544 00:28:17.463 }, 00:28:17.463 { 00:28:17.463 "name": "thread1", 00:28:17.463 "id": 2, 00:28:17.463 "cpumask": "2", 00:28:17.463 "elapsed": 578836732 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 } 00:28:17.463 { 00:28:17.463 "lcore": 2, 00:28:17.463 "tid": 3994745, 00:28:17.463 "busy": 541710, 00:28:17.463 "idle": 3344157528, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 00:28:17.463 "sys": 0, 00:28:17.463 "usr": 145, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "thread2", 00:28:17.463 "id": 3, 00:28:17.463 "cpumask": "4", 00:28:17.463 "elapsed": 495863640 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 } 00:28:17.463 { 00:28:17.463 "lcore": 3, 00:28:17.463 "tid": 3994746, 00:28:17.463 "busy": 508604, 00:28:17.463 "idle": 3339593296, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 
00:28:17.463 "sys": 0, 00:28:17.463 "usr": 143, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "thread3", 00:28:17.463 "id": 4, 00:28:17.463 "cpumask": "8", 00:28:17.463 "elapsed": 411534910 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 } 00:28:17.463 { 00:28:17.463 "lcore": 4, 00:28:17.463 "tid": 3994747, 00:28:17.463 "busy": 541698, 00:28:17.463 "idle": 3342002126, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 00:28:17.463 "sys": 2, 00:28:17.463 "usr": 143, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "thread4", 00:28:17.463 "id": 5, 00:28:17.463 "cpumask": "10", 00:28:17.463 "elapsed": 346253354 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 } 00:28:17.463 { 00:28:17.463 "lcore": 37, 00:28:17.463 "tid": 3994748, 00:28:17.463 "busy": 176254, 00:28:17.463 "idle": 3344790238, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 00:28:17.463 "sys": 0, 00:28:17.463 "usr": 145, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "thread37", 00:28:17.463 "id": 6, 00:28:17.463 "cpumask": "2000000000", 00:28:17.463 "elapsed": 263013078 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 } 00:28:17.463 { 00:28:17.463 "lcore": 38, 00:28:17.463 "tid": 3994749, 00:28:17.463 "busy": 226648, 00:28:17.463 "idle": 3346167014, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 00:28:17.463 "sys": 1, 00:28:17.463 "usr": 144, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "thread38", 00:28:17.463 "id": 7, 00:28:17.463 "cpumask": "4000000000", 00:28:17.463 "elapsed": 192167314 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 } 00:28:17.463 { 00:28:17.463 "lcore": 39, 00:28:17.463 "tid": 3994750, 00:28:17.463 "busy": 259050, 00:28:17.463 "idle": 3348603134, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 00:28:17.463 "sys": 4, 00:28:17.463 "usr": 142, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "thread39", 00:28:17.463 "id": 8, 00:28:17.463 "cpumask": "8000000000", 00:28:17.463 "elapsed": 117497866 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 } 00:28:17.463 { 00:28:17.463 "lcore": 40, 00:28:17.463 "tid": 3994751, 00:28:17.463 "busy": 244994, 00:28:17.463 "idle": 3351273164, 00:28:17.463 "in_interrupt": false, 00:28:17.463 "irq": 1, 00:28:17.463 "sys": 0, 00:28:17.463 "usr": 145, 00:28:17.463 "lw_threads": [ 00:28:17.463 { 00:28:17.463 "name": "thread40", 00:28:17.463 "id": 9, 00:28:17.463 "cpumask": "10000000000", 00:28:17.463 "elapsed": 52111088 00:28:17.463 } 00:28:17.463 ] 00:28:17.463 }' 00:28:17.463 13:59:48 scheduler.core_isolating -- scheduler/core_isolating.sh@39 -- # jq -r 'select(.lcore == 2) | .lw_threads | length' 00:28:17.722 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@39 -- # isolated_thread_count=1 00:28:17.722 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@40 -- # jq -r 'select(.lcore) | .lw_threads | length' 00:28:17.722 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@40 -- # awk '{s+=$1} END {print s}' 00:28:17.722 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@40 -- # echo '{ 00:28:17.722 "lcore": 1, 00:28:17.722 "tid": 3994691, 00:28:17.722 "busy": 926635278, 00:28:17.722 "idle": 2408737184, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 24, 00:28:17.722 "usr": 120, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "app_thread", 00:28:17.722 "id": 1, 00:28:17.722 "cpumask": "2", 00:28:17.722 "elapsed": 3349615544 00:28:17.722 }, 00:28:17.722 { 00:28:17.722 "name": 
"thread1", 00:28:17.722 "id": 2, 00:28:17.722 "cpumask": "2", 00:28:17.722 "elapsed": 578836732 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 } 00:28:17.722 { 00:28:17.722 "lcore": 2, 00:28:17.722 "tid": 3994745, 00:28:17.722 "busy": 541710, 00:28:17.722 "idle": 3344157528, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 0, 00:28:17.722 "usr": 145, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "thread2", 00:28:17.722 "id": 3, 00:28:17.722 "cpumask": "4", 00:28:17.722 "elapsed": 495863640 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 } 00:28:17.722 { 00:28:17.722 "lcore": 3, 00:28:17.722 "tid": 3994746, 00:28:17.722 "busy": 508604, 00:28:17.722 "idle": 3339593296, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 0, 00:28:17.722 "usr": 143, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "thread3", 00:28:17.722 "id": 4, 00:28:17.722 "cpumask": "8", 00:28:17.722 "elapsed": 411534910 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 } 00:28:17.722 { 00:28:17.722 "lcore": 4, 00:28:17.722 "tid": 3994747, 00:28:17.722 "busy": 541698, 00:28:17.722 "idle": 3342002126, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 2, 00:28:17.722 "usr": 143, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "thread4", 00:28:17.722 "id": 5, 00:28:17.722 "cpumask": "10", 00:28:17.722 "elapsed": 346253354 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 } 00:28:17.722 { 00:28:17.722 "lcore": 37, 00:28:17.722 "tid": 3994748, 00:28:17.722 "busy": 176254, 00:28:17.722 "idle": 3344790238, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 0, 00:28:17.722 "usr": 145, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "thread37", 00:28:17.722 "id": 6, 00:28:17.722 "cpumask": "2000000000", 00:28:17.722 "elapsed": 263013078 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 } 00:28:17.722 { 00:28:17.722 "lcore": 38, 00:28:17.722 "tid": 3994749, 00:28:17.722 "busy": 226648, 00:28:17.722 "idle": 3346167014, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 1, 00:28:17.722 "usr": 144, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "thread38", 00:28:17.722 "id": 7, 00:28:17.722 "cpumask": "4000000000", 00:28:17.722 "elapsed": 192167314 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 } 00:28:17.722 { 00:28:17.722 "lcore": 39, 00:28:17.722 "tid": 3994750, 00:28:17.722 "busy": 259050, 00:28:17.722 "idle": 3348603134, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 4, 00:28:17.722 "usr": 142, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "thread39", 00:28:17.722 "id": 8, 00:28:17.722 "cpumask": "8000000000", 00:28:17.722 "elapsed": 117497866 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 } 00:28:17.722 { 00:28:17.722 "lcore": 40, 00:28:17.722 "tid": 3994751, 00:28:17.722 "busy": 244994, 00:28:17.722 "idle": 3351273164, 00:28:17.722 "in_interrupt": false, 00:28:17.722 "irq": 1, 00:28:17.722 "sys": 0, 00:28:17.722 "usr": 145, 00:28:17.722 "lw_threads": [ 00:28:17.722 { 00:28:17.722 "name": "thread40", 00:28:17.722 "id": 9, 00:28:17.722 "cpumask": "10000000000", 00:28:17.722 "elapsed": 52111088 00:28:17.722 } 00:28:17.722 ] 00:28:17.722 }' 00:28:17.722 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@40 -- # total_thread_count=9 00:28:17.722 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@42 -- # rpc_cmd framework_set_scheduler dynamic 00:28:17.722 13:59:49 
scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.722 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.981 [2024-12-05 13:59:49.266807] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:28:17.981 [2024-12-05 13:59:49.266844] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:28:17.981 [2024-12-05 13:59:49.266859] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:28:17.981 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.981 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@44 -- # rpc_cmd framework_get_reactors 00:28:17.981 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.981 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:17.981 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@44 -- # jq -r '.reactors[]' 00:28:17.981 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.981 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@44 -- # reactors='{ 00:28:17.981 "lcore": 1, 00:28:17.981 "tid": 3994691, 00:28:17.981 "busy": 1077750158, 00:28:17.981 "idle": 3387738346, 00:28:17.981 "in_interrupt": false, 00:28:17.981 "irq": 1, 00:28:17.981 "sys": 25, 00:28:17.981 "usr": 162, 00:28:17.981 "core_freq": 2300, 00:28:17.981 "lw_threads": [ 00:28:17.981 { 00:28:17.981 "name": "app_thread", 00:28:17.981 "id": 1, 00:28:17.981 "cpumask": "2", 00:28:17.981 "elapsed": 4479605254 00:28:17.981 } 00:28:17.981 ] 00:28:17.981 } 00:28:17.981 { 00:28:17.981 "lcore": 2, 00:28:17.981 "tid": 3994745, 00:28:17.981 "busy": 541710, 00:28:17.981 "idle": 4474277060, 00:28:17.981 "in_interrupt": false, 00:28:17.981 "irq": 2, 00:28:17.981 "sys": 1, 00:28:17.981 "usr": 193, 00:28:17.981 "core_freq": 2300, 00:28:17.981 "lw_threads": [ 00:28:17.981 { 00:28:17.981 "name": "thread2", 00:28:17.981 "id": 3, 00:28:17.981 "cpumask": "4", 00:28:17.981 "elapsed": 1625853350 00:28:17.981 } 00:28:17.981 ] 00:28:17.981 } 00:28:17.981 { 00:28:17.981 "lcore": 3, 00:28:17.981 "tid": 3994746, 00:28:17.981 "busy": 508604, 00:28:17.981 "idle": 4468962406, 00:28:17.981 "in_interrupt": false, 00:28:17.981 "irq": 1, 00:28:17.981 "sys": 1, 00:28:17.981 "usr": 192, 00:28:17.981 "core_freq": 2300, 00:28:17.981 "lw_threads": [ 00:28:17.981 { 00:28:17.981 "name": "thread3", 00:28:17.981 "id": 4, 00:28:17.981 "cpumask": "8", 00:28:17.981 "elapsed": 1541524620 00:28:17.981 }, 00:28:17.981 { 00:28:17.981 "name": "thread4", 00:28:17.981 "id": 5, 00:28:17.981 "cpumask": "10", 00:28:17.981 "elapsed": 18315736 00:28:17.981 }, 00:28:17.981 { 00:28:17.981 "name": "thread39", 00:28:17.981 "id": 8, 00:28:17.981 "cpumask": "8000000000", 00:28:17.981 "elapsed": 18156646 00:28:17.981 }, 00:28:17.981 { 00:28:17.981 "name": "thread38", 00:28:17.981 "id": 7, 00:28:17.981 "cpumask": "4000000000", 00:28:17.981 "elapsed": 18153564 00:28:17.981 }, 00:28:17.981 { 00:28:17.981 "name": "thread37", 00:28:17.981 "id": 6, 00:28:17.981 "cpumask": "2000000000", 00:28:17.981 "elapsed": 18097290 00:28:17.981 }, 00:28:17.981 { 00:28:17.981 "name": "thread40", 00:28:17.981 "id": 9, 00:28:17.981 "cpumask": "10000000000", 00:28:17.981 "elapsed": 18083162 00:28:17.981 }, 00:28:17.981 { 00:28:17.981 "name": "thread1", 00:28:17.981 "id": 2, 00:28:17.981 "cpumask": "2", 00:28:17.981 "elapsed": 16565602 
00:28:17.981 } 00:28:17.981 ] 00:28:17.981 } 00:28:17.981 { 00:28:17.981 "lcore": 4, 00:28:17.981 "tid": 3994747, 00:28:17.981 "busy": 541698, 00:28:17.981 "idle": 4475907710, 00:28:17.981 "in_interrupt": false, 00:28:17.981 "irq": 2, 00:28:17.981 "sys": 5, 00:28:17.981 "usr": 189, 00:28:17.981 "core_freq": 2300, 00:28:17.981 "lw_threads": [] 00:28:17.981 } 00:28:17.981 { 00:28:17.981 "lcore": 37, 00:28:17.981 "tid": 3994748, 00:28:17.981 "busy": 176254, 00:28:17.981 "idle": 4477312700, 00:28:17.981 "in_interrupt": false, 00:28:17.981 "irq": 1, 00:28:17.981 "sys": 1, 00:28:17.982 "usr": 193, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 38, 00:28:17.982 "tid": 3994749, 00:28:17.982 "busy": 226648, 00:28:17.982 "idle": 4478114382, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 1, 00:28:17.982 "usr": 193, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 39, 00:28:17.982 "tid": 3994750, 00:28:17.982 "busy": 259050, 00:28:17.982 "idle": 4478569684, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 4, 00:28:17.982 "usr": 190, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 40, 00:28:17.982 "tid": 3994751, 00:28:17.982 "busy": 244994, 00:28:17.982 "idle": 4479444270, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 0, 00:28:17.982 "usr": 194, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 }' 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@45 -- # isolated_thread_ids=($(echo "$reactors" | jq -r "select(.lcore == ${isolated_core}) | .lw_threads" | jq -r '.[].id')) 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@45 -- # jq -r 'select(.lcore == 2) | .lw_threads' 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@45 -- # echo '{ 00:28:17.982 "lcore": 1, 00:28:17.982 "tid": 3994691, 00:28:17.982 "busy": 1077750158, 00:28:17.982 "idle": 3387738346, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 25, 00:28:17.982 "usr": 162, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [ 00:28:17.982 { 00:28:17.982 "name": "app_thread", 00:28:17.982 "id": 1, 00:28:17.982 "cpumask": "2", 00:28:17.982 "elapsed": 4479605254 00:28:17.982 } 00:28:17.982 ] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 2, 00:28:17.982 "tid": 3994745, 00:28:17.982 "busy": 541710, 00:28:17.982 "idle": 4474277060, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 2, 00:28:17.982 "sys": 1, 00:28:17.982 "usr": 193, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [ 00:28:17.982 { 00:28:17.982 "name": "thread2", 00:28:17.982 "id": 3, 00:28:17.982 "cpumask": "4", 00:28:17.982 "elapsed": 1625853350 00:28:17.982 } 00:28:17.982 ] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 3, 00:28:17.982 "tid": 3994746, 00:28:17.982 "busy": 508604, 00:28:17.982 "idle": 4468962406, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 1, 00:28:17.982 "usr": 192, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [ 00:28:17.982 { 00:28:17.982 "name": "thread3", 00:28:17.982 "id": 4, 00:28:17.982 "cpumask": "8", 00:28:17.982 "elapsed": 1541524620 00:28:17.982 }, 00:28:17.982 { 00:28:17.982 "name": "thread4", 00:28:17.982 "id": 5, 00:28:17.982 "cpumask": "10", 
00:28:17.982 "elapsed": 18315736 00:28:17.982 }, 00:28:17.982 { 00:28:17.982 "name": "thread39", 00:28:17.982 "id": 8, 00:28:17.982 "cpumask": "8000000000", 00:28:17.982 "elapsed": 18156646 00:28:17.982 }, 00:28:17.982 { 00:28:17.982 "name": "thread38", 00:28:17.982 "id": 7, 00:28:17.982 "cpumask": "4000000000", 00:28:17.982 "elapsed": 18153564 00:28:17.982 }, 00:28:17.982 { 00:28:17.982 "name": "thread37", 00:28:17.982 "id": 6, 00:28:17.982 "cpumask": "2000000000", 00:28:17.982 "elapsed": 18097290 00:28:17.982 }, 00:28:17.982 { 00:28:17.982 "name": "thread40", 00:28:17.982 "id": 9, 00:28:17.982 "cpumask": "10000000000", 00:28:17.982 "elapsed": 18083162 00:28:17.982 }, 00:28:17.982 { 00:28:17.982 "name": "thread1", 00:28:17.982 "id": 2, 00:28:17.982 "cpumask": "2", 00:28:17.982 "elapsed": 16565602 00:28:17.982 } 00:28:17.982 ] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 4, 00:28:17.982 "tid": 3994747, 00:28:17.982 "busy": 541698, 00:28:17.982 "idle": 4475907710, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 2, 00:28:17.982 "sys": 5, 00:28:17.982 "usr": 189, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 37, 00:28:17.982 "tid": 3994748, 00:28:17.982 "busy": 176254, 00:28:17.982 "idle": 4477312700, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 1, 00:28:17.982 "usr": 193, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 38, 00:28:17.982 "tid": 3994749, 00:28:17.982 "busy": 226648, 00:28:17.982 "idle": 4478114382, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 1, 00:28:17.982 "usr": 193, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 39, 00:28:17.982 "tid": 3994750, 00:28:17.982 "busy": 259050, 00:28:17.982 "idle": 4478569684, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 4, 00:28:17.982 "usr": 190, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 } 00:28:17.982 { 00:28:17.982 "lcore": 40, 00:28:17.982 "tid": 3994751, 00:28:17.982 "busy": 244994, 00:28:17.982 "idle": 4479444270, 00:28:17.982 "in_interrupt": false, 00:28:17.982 "irq": 1, 00:28:17.982 "sys": 0, 00:28:17.982 "usr": 194, 00:28:17.982 "core_freq": 2300, 00:28:17.982 "lw_threads": [] 00:28:17.982 }' 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@45 -- # jq -r '.[].id' 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@48 -- # jq -r 'select(.lcore == 2) | .lw_threads | length' 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@48 -- # tmp_count=1 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@49 -- # (( isolated_thread_count == tmp_count )) 00:28:17.982 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@52 -- # jq -r 'select(.lcore == 3) | .lw_threads| length' 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@52 -- # idle_thread_count=7 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@53 -- # tmp_count=8 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@56 -- # for thread_id in "${isolated_thread_ids[@]}" 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@57 -- # active_thread 3 95 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # 
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 3 95 00:28:18.241 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.241 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.241 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@60 -- # jq -r '.reactors[]' 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@60 -- # rpc_cmd framework_get_reactors 00:28:18.241 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.241 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.241 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.241 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@60 -- # reactors='{ 00:28:18.241 "lcore": 1, 00:28:18.241 "tid": 3994691, 00:28:18.241 "busy": 1090587358, 00:28:18.241 "idle": 4087737108, 00:28:18.241 "in_interrupt": false, 00:28:18.241 "irq": 1, 00:28:18.241 "sys": 26, 00:28:18.241 "usr": 192, 00:28:18.241 "core_freq": 2300, 00:28:18.242 "lw_threads": [ 00:28:18.242 { 00:28:18.242 "name": "app_thread", 00:28:18.242 "id": 1, 00:28:18.242 "cpumask": "2", 00:28:18.242 "elapsed": 5192483230 00:28:18.242 } 00:28:18.242 ] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 2, 00:28:18.242 "tid": 3994745, 00:28:18.242 "busy": 721896, 00:28:18.242 "idle": 5186717292, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 2, 00:28:18.242 "sys": 2, 00:28:18.242 "usr": 223, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [ 00:28:18.242 { 00:28:18.242 "name": "thread2", 00:28:18.242 "id": 3, 00:28:18.242 "cpumask": "4", 00:28:18.242 "elapsed": 2338731326 00:28:18.242 } 00:28:18.242 ] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 3, 00:28:18.242 "tid": 3994746, 00:28:18.242 "busy": 508604, 00:28:18.242 "idle": 5181604434, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 1, 00:28:18.242 "sys": 1, 00:28:18.242 "usr": 222, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [ 00:28:18.242 { 00:28:18.242 "name": "thread3", 00:28:18.242 "id": 4, 00:28:18.242 "cpumask": "8", 00:28:18.242 "elapsed": 2254402596 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread4", 00:28:18.242 "id": 5, 00:28:18.242 "cpumask": "10", 00:28:18.242 "elapsed": 731193712 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread39", 00:28:18.242 "id": 8, 00:28:18.242 "cpumask": "8000000000", 00:28:18.242 "elapsed": 731034622 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread38", 00:28:18.242 "id": 7, 00:28:18.242 "cpumask": "4000000000", 00:28:18.242 "elapsed": 731031540 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread37", 00:28:18.242 "id": 6, 00:28:18.242 "cpumask": "2000000000", 00:28:18.242 "elapsed": 730975266 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread40", 00:28:18.242 "id": 9, 00:28:18.242 "cpumask": "10000000000", 00:28:18.242 "elapsed": 730961138 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread1", 00:28:18.242 "id": 2, 00:28:18.242 "cpumask": "2", 00:28:18.242 "elapsed": 729443578 00:28:18.242 } 00:28:18.242 ] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 4, 00:28:18.242 "tid": 3994747, 00:28:18.242 "busy": 541698, 00:28:18.242 "idle": 5187848110, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 2, 00:28:18.242 "sys": 5, 00:28:18.242 "usr": 
219, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 37, 00:28:18.242 "tid": 3994748, 00:28:18.242 "busy": 176254, 00:28:18.242 "idle": 5189195832, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 1, 00:28:18.242 "sys": 1, 00:28:18.242 "usr": 224, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 38, 00:28:18.242 "tid": 3994749, 00:28:18.242 "busy": 226648, 00:28:18.242 "idle": 5189796040, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 1, 00:28:18.242 "sys": 1, 00:28:18.242 "usr": 224, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 39, 00:28:18.242 "tid": 3994750, 00:28:18.242 "busy": 259050, 00:28:18.242 "idle": 5190448346, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 1, 00:28:18.242 "sys": 4, 00:28:18.242 "usr": 221, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 40, 00:28:18.242 "tid": 3994751, 00:28:18.242 "busy": 244994, 00:28:18.242 "idle": 5190988158, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 1, 00:28:18.242 "sys": 0, 00:28:18.242 "usr": 225, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [] 00:28:18.242 }' 00:28:18.242 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@61 -- # idle_thread_ids=($(echo "$reactors" | jq -r "select(.lcore == ${scheduling_core}) | .lw_threads" | jq -r '.[].id')) 00:28:18.242 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@61 -- # jq -r 'select(.lcore == 3) | .lw_threads' 00:28:18.242 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@61 -- # jq -r '.[].id' 00:28:18.242 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@61 -- # echo '{ 00:28:18.242 "lcore": 1, 00:28:18.242 "tid": 3994691, 00:28:18.242 "busy": 1090587358, 00:28:18.242 "idle": 4087737108, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 1, 00:28:18.242 "sys": 26, 00:28:18.242 "usr": 192, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [ 00:28:18.242 { 00:28:18.242 "name": "app_thread", 00:28:18.242 "id": 1, 00:28:18.242 "cpumask": "2", 00:28:18.242 "elapsed": 5192483230 00:28:18.242 } 00:28:18.242 ] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 2, 00:28:18.242 "tid": 3994745, 00:28:18.242 "busy": 721896, 00:28:18.242 "idle": 5186717292, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 2, 00:28:18.242 "sys": 2, 00:28:18.242 "usr": 223, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [ 00:28:18.242 { 00:28:18.242 "name": "thread2", 00:28:18.242 "id": 3, 00:28:18.242 "cpumask": "4", 00:28:18.242 "elapsed": 2338731326 00:28:18.242 } 00:28:18.242 ] 00:28:18.242 } 00:28:18.242 { 00:28:18.242 "lcore": 3, 00:28:18.242 "tid": 3994746, 00:28:18.242 "busy": 508604, 00:28:18.242 "idle": 5181604434, 00:28:18.242 "in_interrupt": false, 00:28:18.242 "irq": 1, 00:28:18.242 "sys": 1, 00:28:18.242 "usr": 222, 00:28:18.242 "core_freq": 2300, 00:28:18.242 "lw_threads": [ 00:28:18.242 { 00:28:18.242 "name": "thread3", 00:28:18.242 "id": 4, 00:28:18.242 "cpumask": "8", 00:28:18.242 "elapsed": 2254402596 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread4", 00:28:18.242 "id": 5, 00:28:18.242 "cpumask": "10", 00:28:18.242 "elapsed": 731193712 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread39", 00:28:18.242 "id": 8, 00:28:18.242 "cpumask": "8000000000", 00:28:18.242 "elapsed": 
731034622 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread38", 00:28:18.242 "id": 7, 00:28:18.242 "cpumask": "4000000000", 00:28:18.242 "elapsed": 731031540 00:28:18.242 }, 00:28:18.242 { 00:28:18.242 "name": "thread37", 00:28:18.242 "id": 6, 00:28:18.242 "cpumask": "2000000000", 00:28:18.242 "elapsed": 730975266 00:28:18.242 }, 00:28:18.242 { 00:28:18.243 "name": "thread40", 00:28:18.243 "id": 9, 00:28:18.243 "cpumask": "10000000000", 00:28:18.243 "elapsed": 730961138 00:28:18.243 }, 00:28:18.243 { 00:28:18.243 "name": "thread1", 00:28:18.243 "id": 2, 00:28:18.243 "cpumask": "2", 00:28:18.243 "elapsed": 729443578 00:28:18.243 } 00:28:18.243 ] 00:28:18.243 } 00:28:18.243 { 00:28:18.243 "lcore": 4, 00:28:18.243 "tid": 3994747, 00:28:18.243 "busy": 541698, 00:28:18.243 "idle": 5187848110, 00:28:18.243 "in_interrupt": false, 00:28:18.243 "irq": 2, 00:28:18.243 "sys": 5, 00:28:18.243 "usr": 219, 00:28:18.243 "core_freq": 2300, 00:28:18.243 "lw_threads": [] 00:28:18.243 } 00:28:18.243 { 00:28:18.243 "lcore": 37, 00:28:18.243 "tid": 3994748, 00:28:18.243 "busy": 176254, 00:28:18.243 "idle": 5189195832, 00:28:18.243 "in_interrupt": false, 00:28:18.243 "irq": 1, 00:28:18.243 "sys": 1, 00:28:18.243 "usr": 224, 00:28:18.243 "core_freq": 2300, 00:28:18.243 "lw_threads": [] 00:28:18.243 } 00:28:18.243 { 00:28:18.243 "lcore": 38, 00:28:18.243 "tid": 3994749, 00:28:18.243 "busy": 226648, 00:28:18.243 "idle": 5189796040, 00:28:18.243 "in_interrupt": false, 00:28:18.243 "irq": 1, 00:28:18.243 "sys": 1, 00:28:18.243 "usr": 224, 00:28:18.243 "core_freq": 2300, 00:28:18.243 "lw_threads": [] 00:28:18.243 } 00:28:18.243 { 00:28:18.243 "lcore": 39, 00:28:18.243 "tid": 3994750, 00:28:18.243 "busy": 259050, 00:28:18.243 "idle": 5190448346, 00:28:18.243 "in_interrupt": false, 00:28:18.243 "irq": 1, 00:28:18.243 "sys": 4, 00:28:18.243 "usr": 221, 00:28:18.243 "core_freq": 2300, 00:28:18.243 "lw_threads": [] 00:28:18.243 } 00:28:18.243 { 00:28:18.243 "lcore": 40, 00:28:18.243 "tid": 3994751, 00:28:18.243 "busy": 244994, 00:28:18.243 "idle": 5190988158, 00:28:18.243 "in_interrupt": false, 00:28:18.243 "irq": 1, 00:28:18.243 "sys": 0, 00:28:18.243 "usr": 225, 00:28:18.243 "core_freq": 2300, 00:28:18.243 "lw_threads": [] 00:28:18.243 }' 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@63 -- # for thread_id in "${idle_thread_ids[@]}" 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@64 -- # (( thread_id == 1 )) 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@67 -- # active_thread 4 80 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 4 80 00:28:18.243 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.243 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.243 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@63 -- # for thread_id in "${idle_thread_ids[@]}" 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@64 -- # (( thread_id == 1 )) 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@67 -- # active_thread 5 80 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 5 80 00:28:18.243 13:59:49 
scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.243 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.243 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@63 -- # for thread_id in "${idle_thread_ids[@]}" 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@64 -- # (( thread_id == 1 )) 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@67 -- # active_thread 8 80 00:28:18.243 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 8 80 00:28:18.243 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.243 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@63 -- # for thread_id in "${idle_thread_ids[@]}" 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@64 -- # (( thread_id == 1 )) 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@67 -- # active_thread 7 80 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 7 80 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@63 -- # for thread_id in "${idle_thread_ids[@]}" 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@64 -- # (( thread_id == 1 )) 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@67 -- # active_thread 6 80 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 6 80 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@63 -- # for thread_id in "${idle_thread_ids[@]}" 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@64 -- # (( thread_id == 1 )) 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@67 -- # active_thread 9 80 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 9 80 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@63 -- # for thread_id in "${idle_thread_ids[@]}" 00:28:18.502 13:59:49 scheduler.core_isolating -- 
scheduler/core_isolating.sh@64 -- # (( thread_id == 1 )) 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@67 -- # active_thread 2 80 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/common.sh@496 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 2 80 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:18.502 13:59:49 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.502 13:59:49 scheduler.core_isolating -- scheduler/core_isolating.sh@69 -- # sleep 20 00:28:40.435 14:00:09 scheduler.core_isolating -- scheduler/core_isolating.sh@71 -- # rpc_cmd framework_get_reactors 00:28:40.435 14:00:09 scheduler.core_isolating -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.435 14:00:09 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 14:00:09 scheduler.core_isolating -- scheduler/core_isolating.sh@71 -- # jq -r '.reactors[]' 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@71 -- # reactors='{ 00:28:40.435 "lcore": 1, 00:28:40.435 "tid": 3994691, 00:28:40.435 "busy": 3001108938, 00:28:40.435 "idle": 34874826610, 00:28:40.435 "in_interrupt": false, 00:28:40.435 "irq": 5, 00:28:40.435 "sys": 27, 00:28:40.435 "usr": 1612, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread1", 00:28:40.435 "id": 2, 00:28:40.435 "cpumask": "2", 00:28:40.435 "elapsed": 2300259720 00:28:40.435 }, 00:28:40.435 { 00:28:40.435 "name": "app_thread", 00:28:40.435 "id": 1, 00:28:40.435 "cpumask": "2", 00:28:40.435 "elapsed": 1656042538 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 } 00:28:40.435 { 00:28:40.435 "lcore": 2, 00:28:40.435 "tid": 3994745, 00:28:40.435 "busy": 44146214232, 00:28:40.435 "idle": 7693841714, 00:28:40.435 "in_interrupt": false, 00:28:40.435 "irq": 6, 00:28:40.435 "sys": 2, 00:28:40.435 "usr": 2251, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread2", 00:28:40.435 "id": 3, 00:28:40.435 "cpumask": "4", 00:28:40.435 "elapsed": 48840634266 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 } 00:28:40.435 { 00:28:40.435 "lcore": 3, 00:28:40.435 "tid": 3994746, 00:28:40.435 "busy": 45647602590, 00:28:40.435 "idle": 6216828316, 00:28:40.435 "in_interrupt": false, 00:28:40.435 "irq": 7, 00:28:40.435 "sys": 2, 00:28:40.435 "usr": 2251, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread3", 00:28:40.435 "id": 4, 00:28:40.435 "cpumask": "8", 00:28:40.435 "elapsed": 48756305536 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 } 00:28:40.435 { 00:28:40.435 "lcore": 4, 00:28:40.435 "tid": 3994747, 00:28:40.435 "busy": 9944717596, 00:28:40.435 "idle": 18910363192, 00:28:40.435 "in_interrupt": false, 00:28:40.435 "irq": 6, 00:28:40.435 "sys": 8, 00:28:40.435 "usr": 1247, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread4", 00:28:40.435 "id": 5, 00:28:40.435 "cpumask": "10", 00:28:40.435 "elapsed": 12055344978 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 } 00:28:40.435 { 00:28:40.435 "lcore": 37, 00:28:40.435 "tid": 3994748, 00:28:40.435 "busy": 5153118774, 00:28:40.435 "idle": 20298500806, 00:28:40.435 "in_interrupt": false, 
00:28:40.435 "irq": 3, 00:28:40.435 "sys": 1, 00:28:40.435 "usr": 1105, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread37", 00:28:40.435 "id": 6, 00:28:40.435 "cpumask": "2000000000", 00:28:40.435 "elapsed": 5981483214 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 } 00:28:40.435 { 00:28:40.435 "lcore": 38, 00:28:40.435 "tid": 3994749, 00:28:40.435 "busy": 5337185812, 00:28:40.435 "idle": 22185262048, 00:28:40.435 "in_interrupt": false, 00:28:40.435 "irq": 3, 00:28:40.435 "sys": 1, 00:28:40.435 "usr": 1195, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread38", 00:28:40.435 "id": 7, 00:28:40.435 "cpumask": "4000000000", 00:28:40.435 "elapsed": 6165534412 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 } 00:28:40.435 { 00:28:40.435 "lcore": 39, 00:28:40.435 "tid": 3994750, 00:28:40.435 "busy": 9938012966, 00:28:40.435 "idle": 28120204352, 00:28:40.435 "in_interrupt": false, 00:28:40.435 "irq": 5, 00:28:40.435 "sys": 4, 00:28:40.435 "usr": 1651, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread39", 00:28:40.435 "id": 8, 00:28:40.435 "cpumask": "8000000000", 00:28:40.435 "elapsed": 11871275270 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 } 00:28:40.435 { 00:28:40.435 "lcore": 40, 00:28:40.435 "tid": 3994751, 00:28:40.435 "busy": 2392665404, 00:28:40.435 "idle": 28487552838, 00:28:40.435 "in_interrupt": false, 00:28:40.435 "irq": 6, 00:28:40.435 "sys": 1, 00:28:40.435 "usr": 1341, 00:28:40.435 "core_freq": 2300, 00:28:40.435 "lw_threads": [ 00:28:40.435 { 00:28:40.435 "name": "thread40", 00:28:40.435 "id": 9, 00:28:40.435 "cpumask": "10000000000", 00:28:40.435 "elapsed": 2484322234 00:28:40.435 } 00:28:40.435 ] 00:28:40.435 }' 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@72 -- # jq -r 'select(.lcore == 2) | .lw_threads | length' 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@72 -- # tmp_count=1 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@73 -- # (( isolated_thread_count == tmp_count )) 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@75 -- # jq -r 'select(.lcore == 3) | .lw_threads| length' 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@75 -- # tmp_count=1 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@76 -- # (( idle_thread_count >= tmp_count )) 00:28:40.435 14:00:10 scheduler.core_isolating -- scheduler/core_isolating.sh@1 -- # killprocess 3994691 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@954 -- # '[' -z 3994691 ']' 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@958 -- # kill -0 3994691 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@959 -- # uname 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994691 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3994691' 00:28:40.435 killing process with pid 3994691 00:28:40.435 
14:00:10 scheduler.core_isolating -- common/autotest_common.sh@973 -- # kill 3994691 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@978 -- # wait 3994691 00:28:40.435 [2024-12-05 14:00:10.542112] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:28:40.435 00:28:40.435 real 0m24.224s 00:28:40.435 user 2m14.247s 00:28:40.435 sys 0m0.798s 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.435 14:00:10 scheduler.core_isolating -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 ************************************ 00:28:40.435 END TEST core_isolating 00:28:40.435 ************************************ 00:28:40.435 14:00:10 scheduler -- scheduler/scheduler.sh@1 -- # restore_cgroups 00:28:40.435 14:00:10 scheduler -- scheduler/isolate_cores.sh@11 -- # xtrace_disable 00:28:40.435 14:00:10 scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 Moving 3973593 (PF_SUPERPRIV,PF_RANDOMIZE) to / from /cpuset 00:28:40.435 Moved 1 processes, failed 0 00:28:40.435 00:28:40.435 real 2m38.093s 00:28:40.435 user 7m44.247s 00:28:40.435 sys 0m17.947s 00:28:40.435 14:00:11 scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.435 14:00:11 scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 ************************************ 00:28:40.435 END TEST scheduler 00:28:40.435 ************************************ 00:28:40.435 14:00:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:40.435 14:00:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:40.435 14:00:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:28:40.435 14:00:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:28:40.435 14:00:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:28:40.435 14:00:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.435 14:00:11 -- common/autotest_common.sh@10 -- # set +x 00:28:40.435 14:00:11 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:28:40.435 14:00:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:28:40.435 14:00:11 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:28:40.435 14:00:11 -- common/autotest_common.sh@10 -- # set +x 00:28:44.619 INFO: APP EXITING 00:28:44.619 INFO: killing all VMs 00:28:44.619 INFO: killing vhost app 00:28:44.619 INFO: EXIT DONE 00:28:47.160 Waiting for block devices as requested 00:28:47.160 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:47.160 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:47.160 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:47.419 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:47.419 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:47.419 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:47.678 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:47.678 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:47.678 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:47.937 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:47.937 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:47.937 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:48.196 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:48.196 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:48.196 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:48.455 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:48.455 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:52.640 Cleaning 00:28:52.640 Removing: /var/run/dpdk/spdk0/config 00:28:52.640 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:52.640 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:52.640 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:52.640 Removing: /dev/shm/bdevperf_trace.pid3965900 00:28:52.640 Removing: /dev/shm/spdk_tgt_trace.pid3836274 00:28:52.640 Removing: /var/run/dpdk/spdk0 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3833765 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3835056 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3836274 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3836815 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3837702 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3837882 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3838647 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3838715 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3839172 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3839429 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3839817 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3840078 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3840324 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3840525 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3840718 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3840955 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3841693 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3844328 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3844666 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3844984 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3845164 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3845897 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3845911 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3846645 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3846814 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3847178 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3847203 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3847417 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3847591 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3848056 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3848250 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3848584 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3849067 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3849273 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3849520 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3849790 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3850116 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3850402 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3850692 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3851033 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3851282 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3851649 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3851863 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3852235 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3852450 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3852827 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3853031 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3853418 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3853638 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3853998 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3854206 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3854569 
00:28:52.640 Removing: /var/run/dpdk/spdk_pid3854820 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3855151 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3855406 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3855716 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3856019 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3856430 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3856780 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3857270 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3858356 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3859392 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3862527 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3864136 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3865737 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3866821 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3866998 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3867081 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3871583 00:28:52.640 Removing: /var/run/dpdk/spdk_pid3872491 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3875292 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3877013 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3878656 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3879735 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3879917 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3879939 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3893713 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3895184 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3896051 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3896852 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3898651 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3904093 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3908503 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3915507 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3921644 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3928560 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3929827 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3940685 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3951228 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3954668 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3956770 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3957074 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3960726 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3963448 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3964320 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3965049 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3965900 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3966205 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3967469 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3968516 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3969288 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3970432 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3970629 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3970884 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3974934 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3975306 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3975879 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3976329 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3982341 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3985901 00:28:52.899 Removing: /var/run/dpdk/spdk_pid3994691 00:28:52.899 Clean 00:28:53.157 14:00:24 -- common/autotest_common.sh@1453 -- # return 0 00:28:53.157 14:00:24 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:28:53.157 14:00:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.157 14:00:24 -- common/autotest_common.sh@10 -- # set +x 00:28:53.157 14:00:24 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:28:53.157 14:00:24 -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.157 14:00:24 -- common/autotest_common.sh@10 -- # set +x 00:28:53.157 14:00:24 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt 00:28:53.157 14:00:24 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log ]] 00:28:53.157 14:00:24 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log 00:28:53.157 14:00:24 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:28:53.157 14:00:24 -- spdk/autotest.sh@398 -- # hostname 00:28:53.157 14:00:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvme-phy-autotest/spdk -t spdk-wfp-46 -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info 00:28:53.417 geninfo: WARNING: invalid characters removed from testname! 00:29:25.486 14:00:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:29:26.054 14:00:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:29:27.958 14:00:59 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:29:29.863 14:01:01 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:29:31.767 14:01:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:29:33.669 14:01:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:29:35.042 14:01:06 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:35.042 14:01:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:35.042 14:01:06 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt ]] 00:29:35.042 14:01:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:35.042 14:01:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:35.042 14:01:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt 00:29:35.042 + [[ -n 3738565 ]] 00:29:35.042 + sudo kill 3738565 00:29:35.310 [Pipeline] } 00:29:35.323 [Pipeline] // stage 00:29:35.328 [Pipeline] } 00:29:35.341 [Pipeline] // timeout 00:29:35.346 [Pipeline] } 00:29:35.357 [Pipeline] // catchError 00:29:35.363 [Pipeline] } 00:29:35.376 [Pipeline] // wrap 00:29:35.382 [Pipeline] } 00:29:35.394 [Pipeline] // catchError 00:29:35.402 [Pipeline] stage 00:29:35.404 [Pipeline] { (Epilogue) 00:29:35.415 [Pipeline] catchError 00:29:35.417 [Pipeline] { 00:29:35.429 [Pipeline] echo 00:29:35.432 Cleanup processes 00:29:35.438 [Pipeline] sh 00:29:35.781 + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:29:35.781 4006271 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:29:35.794 [Pipeline] sh 00:29:36.076 ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:29:36.076 ++ grep -v 'sudo pgrep' 00:29:36.076 ++ awk '{print $1}' 00:29:36.076 + sudo kill -9 00:29:36.076 + true 00:29:36.087 [Pipeline] sh 00:29:36.368 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:46.483 [Pipeline] sh 00:29:46.768 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:46.768 Artifacts sizes are good 00:29:46.783 [Pipeline] archiveArtifacts 00:29:46.791 Archiving artifacts 00:29:46.910 [Pipeline] sh 00:29:47.195 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvme-phy-autotest 00:29:47.212 [Pipeline] cleanWs 00:29:47.221 [WS-CLEANUP] Deleting project workspace... 00:29:47.221 [WS-CLEANUP] Deferred wipeout is used... 00:29:47.228 [WS-CLEANUP] done 00:29:47.230 [Pipeline] } 00:29:47.249 [Pipeline] // catchError 00:29:47.260 [Pipeline] sh 00:29:47.541 + logger -p user.info -t JENKINS-CI 00:29:47.550 [Pipeline] } 00:29:47.562 [Pipeline] // stage 00:29:47.568 [Pipeline] } 00:29:47.582 [Pipeline] // node 00:29:47.587 [Pipeline] End of Pipeline 00:29:47.634 Finished: SUCCESS
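For readers reproducing the core_isolating check outside of CI, the verification traced above boils down to a few rpc/jq calls. The sketch below is a hedged reconstruction assembled only from the commands visible in this log (framework_get_reactors, framework_set_scheduler dynamic, and the jq filters on .lcore / .lw_threads); the rpc.py path, the isolated core number (2), and the scheduling core number (3) are assumptions taken from this particular run, not guaranteed defaults of the test.

#!/usr/bin/env bash
# Hedged sketch, not the actual core_isolating.sh: re-derive the thread counts
# that this run checks, assuming an SPDK target is already running with core 2
# isolated and core 3 acting as the scheduling core.
set -euo pipefail

rpc=./scripts/rpc.py        # assumed location of SPDK's rpc.py
isolated_core=2             # taken from the jq filters in this log
scheduling_core=3           # taken from the jq filters in this log

# Flatten the reactors array into a stream of per-lcore JSON objects,
# the same shape echoed throughout the trace above.
reactors=$("$rpc" framework_get_reactors | jq -r '.reactors[]')

# Threads pinned to the isolated core; the run above expects this to stay at 1
# even after switching to the dynamic scheduler.
isolated_count=$(echo "$reactors" | jq -r "select(.lcore == $isolated_core) | .lw_threads | length")

# Total lightweight threads across all reactors, summed exactly as in the log.
total_count=$(echo "$reactors" | jq -r 'select(.lcore) | .lw_threads | length' | awk '{s+=$1} END {print s}')

# Threads currently parked on the scheduling core (the dynamic scheduler may
# later redistribute these, as the later framework_get_reactors dumps show).
on_scheduling_core=$(echo "$reactors" | jq -r "select(.lcore == $scheduling_core) | .lw_threads | length")

echo "isolated=$isolated_count total=$total_count scheduling_core_threads=$on_scheduling_core"

In the run above, that isolated count stays at 1 across every framework_get_reactors dump, while the remaining threads first collect on lcore 3 and are later redistributed by the dynamic scheduler, which is exactly what the final jq comparisons before killprocess assert.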