00:00:00.000 Started by upstream project "autotest-per-patch" build number 132811 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:08.749 The recommended git tool is: git 00:00:08.749 using credential 00000000-0000-0000-0000-000000000002 00:00:08.751 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:08.763 Fetching changes from the remote Git repository 00:00:08.766 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:08.782 Using shallow fetch with depth 1 00:00:08.782 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:08.782 > git --version # timeout=10 00:00:08.794 > git --version # 'git version 2.39.2' 00:00:08.794 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:08.812 Setting http proxy: proxy-dmz.intel.com:911 00:00:08.812 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:15.718 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:15.731 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:15.744 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:15.744 > git config core.sparsecheckout # timeout=10 00:00:15.755 > git read-tree -mu HEAD # timeout=10 00:00:15.771 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:15.788 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:15.788 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:15.895 [Pipeline] Start of Pipeline 00:00:15.911 [Pipeline] library 00:00:15.913 Loading library shm_lib@master 00:00:15.913 Library shm_lib@master is cached. Copying from home. 00:00:15.929 [Pipeline] node 00:00:15.942 Running on GP9 in /var/jenkins/workspace/nvme-phy-autotest 00:00:15.944 [Pipeline] { 00:00:15.952 [Pipeline] catchError 00:00:15.953 [Pipeline] { 00:00:15.964 [Pipeline] wrap 00:00:15.971 [Pipeline] { 00:00:15.979 [Pipeline] stage 00:00:15.980 [Pipeline] { (Prologue) 00:00:16.221 [Pipeline] sh 00:00:16.504 + logger -p user.info -t JENKINS-CI 00:00:16.525 [Pipeline] echo 00:00:16.527 Node: GP9 00:00:16.535 [Pipeline] sh 00:00:16.826 [Pipeline] setCustomBuildProperty 00:00:16.834 [Pipeline] echo 00:00:16.835 Cleanup processes 00:00:16.839 [Pipeline] sh 00:00:17.116 + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:17.116 367485 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:17.130 [Pipeline] sh 00:00:17.407 ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:17.407 ++ awk '{print $1}' 00:00:17.407 ++ grep -v 'sudo pgrep' 00:00:17.407 + sudo kill -9 00:00:17.407 + true 00:00:17.419 [Pipeline] cleanWs 00:00:17.428 [WS-CLEANUP] Deleting project workspace... 00:00:17.428 [WS-CLEANUP] Deferred wipeout is used... 
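(Editor's sketch of the prologue above: it amounts to a shallow fetch plus forced checkout of the pinned jbp revision, then a best-effort kill of stale test processes. URL and revision are taken from the log; Jenkins' credential, proxy, and timeout plumbing is omitted, and xargs -r is substituted for the empty-argument kill the log papers over with '+ true'.)

    #!/usr/bin/env bash
    # Shallow fetch of the build-pool repo and detached checkout of the
    # revision recorded above. Fetching requires the same Gerrit HTTPS
    # credentials the job supplies via GIT_ASKPASS.
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507

    # Cleanup of leftover SPDK test processes, mirroring the
    # "pgrep | grep -v | awk | kill -9" pipeline in the log; xargs -r
    # skips the kill when the pipeline matches nothing.
    sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk \
      | grep -v 'sudo pgrep' | awk '{print $1}' \
      | xargs -r sudo kill -9 || true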
00:00:17.433 [WS-CLEANUP] done 00:00:17.438 [Pipeline] setCustomBuildProperty 00:00:17.453 [Pipeline] sh 00:00:17.730 + sudo git config --global --replace-all safe.directory '*' 00:00:17.835 [Pipeline] httpRequest 00:00:18.226 [Pipeline] echo 00:00:18.228 Sorcerer 10.211.164.112 is alive 00:00:18.237 [Pipeline] retry 00:00:18.239 [Pipeline] { 00:00:18.251 [Pipeline] httpRequest 00:00:18.255 HttpMethod: GET 00:00:18.255 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.256 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.262 Response Code: HTTP/1.1 200 OK 00:00:18.262 Success: Status code 200 is in the accepted range: 200,404 00:00:18.263 Saving response body to /var/jenkins/workspace/nvme-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:47.202 [Pipeline] } 00:00:47.220 [Pipeline] // retry 00:00:47.227 [Pipeline] sh 00:00:47.507 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:47.780 [Pipeline] httpRequest 00:00:48.208 [Pipeline] echo 00:00:48.210 Sorcerer 10.211.164.112 is alive 00:00:48.219 [Pipeline] retry 00:00:48.221 [Pipeline] { 00:00:48.234 [Pipeline] httpRequest 00:00:48.239 HttpMethod: GET 00:00:48.239 URL: http://10.211.164.112/packages/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz 00:00:48.239 Sending request to url: http://10.211.164.112/packages/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz 00:00:48.266 Response Code: HTTP/1.1 200 OK 00:00:48.266 Success: Status code 200 is in the accepted range: 200,404 00:00:48.266 Saving response body to /var/jenkins/workspace/nvme-phy-autotest/spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz 00:07:00.580 [Pipeline] } 00:07:00.597 [Pipeline] // retry 00:07:00.603 [Pipeline] sh 00:07:00.883 + tar --no-same-owner -xf spdk_1ae735a5d13f736acb1895cd8146266345791321.tar.gz 00:07:05.076 [Pipeline] sh 00:07:05.355 + git -C spdk log --oneline -n5 00:07:05.355 1ae735a5d nvme: add poll_group interrupt callback 00:07:05.355 f80471632 nvme: add spdk_nvme_poll_group_get_fd_group() 00:07:05.355 969b360d9 thread: fd_group-based interrupts 00:07:05.355 851f166ec thread: move interrupt allocation to a function 00:07:05.355 c12cb8fe3 util: add method for setting fd_group's wrapper 00:07:05.364 [Pipeline] } 00:07:05.378 [Pipeline] // stage 00:07:05.386 [Pipeline] stage 00:07:05.388 [Pipeline] { (Prepare) 00:07:05.404 [Pipeline] writeFile 00:07:05.421 [Pipeline] sh 00:07:05.703 + logger -p user.info -t JENKINS-CI 00:07:05.715 [Pipeline] sh 00:07:05.997 + logger -p user.info -t JENKINS-CI 00:07:06.038 [Pipeline] sh 00:07:06.320 + cat autorun-spdk.conf 00:07:06.320 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:06.320 SPDK_TEST_IOAT=1 00:07:06.320 SPDK_TEST_NVME=1 00:07:06.320 SPDK_TEST_NVME_CLI=1 00:07:06.321 SPDK_TEST_OCF=1 00:07:06.321 SPDK_RUN_UBSAN=1 00:07:06.321 SPDK_TEST_NVME_CUSE=1 00:07:06.321 SPDK_TEST_SCHEDULER=1 00:07:06.321 SPDK_TEST_ACCEL=1 00:07:06.321 SPDK_TEST_NVME_INTERRUPT=1 00:07:06.327 RUN_NIGHTLY=0 00:07:06.331 [Pipeline] readFile 00:07:06.352 [Pipeline] withEnv 00:07:06.354 [Pipeline] { 00:07:06.367 [Pipeline] sh 00:07:06.651 + set -ex 00:07:06.651 + [[ -f /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf ]] 00:07:06.651 + source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:07:06.651 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:06.651 ++ SPDK_TEST_IOAT=1 00:07:06.651 ++ SPDK_TEST_NVME=1 00:07:06.651 ++ SPDK_TEST_NVME_CLI=1 00:07:06.651 ++ 
SPDK_TEST_OCF=1 00:07:06.651 ++ SPDK_RUN_UBSAN=1 00:07:06.651 ++ SPDK_TEST_NVME_CUSE=1 00:07:06.651 ++ SPDK_TEST_SCHEDULER=1 00:07:06.651 ++ SPDK_TEST_ACCEL=1 00:07:06.651 ++ SPDK_TEST_NVME_INTERRUPT=1 00:07:06.651 ++ RUN_NIGHTLY=0 00:07:06.651 + case $SPDK_TEST_NVMF_NICS in 00:07:06.651 + DRIVERS= 00:07:06.651 + [[ -n '' ]] 00:07:06.651 + exit 0 00:07:06.688 [Pipeline] } 00:07:06.705 [Pipeline] // withEnv 00:07:06.708 [Pipeline] } 00:07:06.717 [Pipeline] // stage 00:07:06.723 [Pipeline] catchError 00:07:06.724 [Pipeline] { 00:07:06.731 [Pipeline] timeout 00:07:06.732 Timeout set to expire in 40 min 00:07:06.733 [Pipeline] { 00:07:06.741 [Pipeline] stage 00:07:06.742 [Pipeline] { (Tests) 00:07:06.750 [Pipeline] sh 00:07:07.027 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvme-phy-autotest 00:07:07.027 ++ readlink -f /var/jenkins/workspace/nvme-phy-autotest 00:07:07.027 + DIR_ROOT=/var/jenkins/workspace/nvme-phy-autotest 00:07:07.027 + [[ -n /var/jenkins/workspace/nvme-phy-autotest ]] 00:07:07.027 + DIR_SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:07:07.027 + DIR_OUTPUT=/var/jenkins/workspace/nvme-phy-autotest/output 00:07:07.027 + [[ -d /var/jenkins/workspace/nvme-phy-autotest/spdk ]] 00:07:07.027 + [[ ! -d /var/jenkins/workspace/nvme-phy-autotest/output ]] 00:07:07.027 + mkdir -p /var/jenkins/workspace/nvme-phy-autotest/output 00:07:07.027 + [[ -d /var/jenkins/workspace/nvme-phy-autotest/output ]] 00:07:07.027 + [[ nvme-phy-autotest == pkgdep-* ]] 00:07:07.027 + cd /var/jenkins/workspace/nvme-phy-autotest 00:07:07.027 + source /etc/os-release 00:07:07.027 ++ NAME='Fedora Linux' 00:07:07.027 ++ VERSION='39 (Cloud Edition)' 00:07:07.027 ++ ID=fedora 00:07:07.027 ++ VERSION_ID=39 00:07:07.027 ++ VERSION_CODENAME= 00:07:07.027 ++ PLATFORM_ID=platform:f39 00:07:07.027 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:07:07.027 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:07.027 ++ LOGO=fedora-logo-icon 00:07:07.027 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:07:07.027 ++ HOME_URL=https://fedoraproject.org/ 00:07:07.027 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:07:07.027 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:07.027 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:07.027 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:07.027 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:07:07.027 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:07.027 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:07:07.027 ++ SUPPORT_END=2024-11-12 00:07:07.027 ++ VARIANT='Cloud Edition' 00:07:07.027 ++ VARIANT_ID=cloud 00:07:07.027 + uname -a 00:07:07.027 Linux spdk-gp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:07:07.027 + sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status 00:07:08.402 Hugepages 00:07:08.402 node hugesize free / total 00:07:08.402 node0 1048576kB 0 / 0 00:07:08.402 node0 2048kB 0 / 0 00:07:08.402 node1 1048576kB 0 / 0 00:07:08.402 node1 2048kB 0 / 0 00:07:08.402 00:07:08.402 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:08.402 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:07:08.402 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:07:08.402 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:07:08.402 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:07:08.402 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:07:08.402 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:07:08.402 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:07:08.402 I/OAT 0000:00:04.7 8086 
0e27 0 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:07:08.402 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:07:08.402 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:08.402 + rm -f /tmp/spdk-ld-path 00:07:08.402 + source autorun-spdk.conf 00:07:08.402 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:08.402 ++ SPDK_TEST_IOAT=1 00:07:08.402 ++ SPDK_TEST_NVME=1 00:07:08.402 ++ SPDK_TEST_NVME_CLI=1 00:07:08.402 ++ SPDK_TEST_OCF=1 00:07:08.402 ++ SPDK_RUN_UBSAN=1 00:07:08.402 ++ SPDK_TEST_NVME_CUSE=1 00:07:08.402 ++ SPDK_TEST_SCHEDULER=1 00:07:08.402 ++ SPDK_TEST_ACCEL=1 00:07:08.402 ++ SPDK_TEST_NVME_INTERRUPT=1 00:07:08.402 ++ RUN_NIGHTLY=0 00:07:08.402 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:08.402 + [[ -n '' ]] 00:07:08.402 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvme-phy-autotest/spdk 00:07:08.402 + for M in /var/spdk/build-*-manifest.txt 00:07:08.402 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:08.402 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:07:08.402 + for M in /var/spdk/build-*-manifest.txt 00:07:08.402 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:08.402 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:07:08.402 + for M in /var/spdk/build-*-manifest.txt 00:07:08.402 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:08.402 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:07:08.402 ++ uname 00:07:08.402 + [[ Linux == \L\i\n\u\x ]] 00:07:08.402 + sudo dmesg -T 00:07:08.660 + sudo dmesg --clear 00:07:08.660 + dmesg_pid=370001 00:07:08.660 + [[ Fedora Linux == FreeBSD ]] 00:07:08.660 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.660 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.660 + sudo dmesg -Tw 00:07:08.660 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:08.660 + [[ -x /usr/src/fio-static/fio ]] 00:07:08.660 + export FIO_BIN=/usr/src/fio-static/fio 00:07:08.660 + FIO_BIN=/usr/src/fio-static/fio 00:07:08.660 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\e\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:08.660 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:07:08.660 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:08.660 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.660 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.660 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:08.660 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.660 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.661 + spdk/autorun.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:07:08.661 23:48:47 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:08.661 23:48:47 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_IOAT=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVME_CLI=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_OCF=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_TEST_NVME_CUSE=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@8 -- $ SPDK_TEST_SCHEDULER=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_ACCEL=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_TEST_NVME_INTERRUPT=1 00:07:08.661 23:48:47 -- nvme-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=0 00:07:08.661 23:48:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:07:08.661 23:48:47 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:07:08.661 23:48:47 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:08.661 23:48:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:07:08.661 23:48:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:07:08.661 23:48:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:08.661 23:48:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.661 23:48:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.661 23:48:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.661 23:48:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.661 23:48:47 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.661 23:48:47 -- paths/export.sh@5 -- $ export PATH 00:07:08.661 23:48:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.661 23:48:47 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output 00:07:08.661 23:48:47 -- common/autobuild_common.sh@493 -- $ date +%s 00:07:08.661 23:48:47 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733784527.XXXXXX 00:07:08.661 23:48:47 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733784527.4bCbX2 00:07:08.661 23:48:47 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:07:08.661 23:48:47 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:07:08.661 23:48:47 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/' 00:07:08.661 23:48:47 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:08.661 23:48:47 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:08.661 23:48:47 -- common/autobuild_common.sh@509 -- $ get_config_params 00:07:08.661 23:48:47 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:07:08.661 23:48:47 -- common/autotest_common.sh@10 -- $ set +x 00:07:08.661 23:48:47 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk' 00:07:08.661 23:48:47 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:07:08.661 23:48:47 -- pm/common@17 -- $ local monitor 00:07:08.661 23:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:08.661 23:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:08.661 23:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:08.661 23:48:47 -- pm/common@21 -- $ date +%s 00:07:08.661 23:48:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:08.661 23:48:47 -- pm/common@21 -- $ date +%s 00:07:08.661 23:48:47 -- pm/common@25 -- $ sleep 1 00:07:08.661 23:48:47 -- pm/common@21 -- $ date +%s 00:07:08.661 23:48:47 -- pm/common@21 -- $ date +%s 00:07:08.661 23:48:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784527 00:07:08.661 23:48:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784527 00:07:08.661 23:48:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784527 00:07:08.661 23:48:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784527 00:07:08.661 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784527_collect-vmstat.pm.log 00:07:08.661 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784527_collect-cpu-load.pm.log 00:07:08.661 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784527_collect-cpu-temp.pm.log 00:07:08.661 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784527_collect-bmc-pm.bmc.pm.log 00:07:09.595 23:48:48 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:09.595 23:48:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:09.595 23:48:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:09.595 23:48:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk 00:07:09.595 23:48:48 -- spdk/autobuild.sh@16 -- $ date -u 00:07:09.595 Mon Dec 9 10:48:48 PM UTC 2024 00:07:09.595 23:48:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:09.853 v25.01-pre-320-g1ae735a5d 00:07:09.853 23:48:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:09.853 23:48:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:09.853 23:48:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:09.853 23:48:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:09.853 23:48:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:09.853 23:48:48 -- common/autotest_common.sh@10 -- $ set +x 00:07:09.853 ************************************ 00:07:09.853 START TEST ubsan 00:07:09.853 ************************************ 00:07:09.853 23:48:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:07:09.853 using ubsan 00:07:09.853 00:07:09.853 real 0m0.000s 00:07:09.853 user 0m0.000s 00:07:09.853 sys 0m0.000s 00:07:09.853 23:48:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:09.853 23:48:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:09.853 ************************************ 00:07:09.853 END TEST ubsan 00:07:09.853 ************************************ 00:07:09.853 23:48:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:09.853 23:48:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:09.853 23:48:48 -- spdk/autobuild.sh@47 -- $ [[ 1 -eq 1 ]] 00:07:09.853 23:48:48 -- spdk/autobuild.sh@48 -- $ ocf_precompile 00:07:09.853 23:48:48 -- common/autobuild_common.sh@441 -- $ run_test autobuild_ocf_precompile _ocf_precompile 00:07:09.853 23:48:48 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:07:09.853 23:48:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:09.853 23:48:48 -- common/autotest_common.sh@10 
-- $ set +x 00:07:09.853 ************************************ 00:07:09.853 START TEST autobuild_ocf_precompile 00:07:09.853 ************************************ 00:07:09.853 23:48:48 autobuild_ocf_precompile -- common/autotest_common.sh@1129 -- $ _ocf_precompile 00:07:09.853 23:48:48 autobuild_ocf_precompile -- common/autobuild_common.sh@21 -- $ echo --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk 00:07:09.853 23:48:48 autobuild_ocf_precompile -- common/autobuild_common.sh@21 -- $ sed s/--enable-coverage//g 00:07:09.853 23:48:48 autobuild_ocf_precompile -- common/autobuild_common.sh@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --with-ublk 00:07:09.853 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:07:09.853 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:07:10.111 Using 'verbs' RDMA provider 00:07:22.868 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:37.762 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:37.762 Creating mk/config.mk...done. 00:07:37.762 Creating mk/cc.flags.mk...done. 00:07:37.762 Type 'make' to build. 00:07:37.762 23:49:14 autobuild_ocf_precompile -- common/autobuild_common.sh@22 -- $ make -j48 include/spdk/config.h 00:07:37.762 23:49:14 autobuild_ocf_precompile -- common/autobuild_common.sh@23 -- $ CC=gcc 00:07:37.762 23:49:14 autobuild_ocf_precompile -- common/autobuild_common.sh@23 -- $ CCAR=ar 00:07:37.762 23:49:14 autobuild_ocf_precompile -- common/autobuild_common.sh@23 -- $ make -j48 -C /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf exportlib O=/var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:07:37.762 make: Entering directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf' 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/acp.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/alru.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cache.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io_class.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_def.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_core.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_debug.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_mngt.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_metadata.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_volume.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_logger.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cleaner.h 00:07:37.762 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_err.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_composite_volume.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_types.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/promotion/nhit.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cfg.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_ctx.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_queue.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_stats.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_priv.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp_structs.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_ops.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru_structs.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop_structs.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache_priv.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_priv.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru_structs.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core_priv.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume_priv.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.h 00:07:37.762 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger_priv.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io_allocator.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_stats.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.c 00:07:37.762 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_builder.c 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx_priv.h 00:07:37.762 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume_priv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_priv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_def_priv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_priv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/ops.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_structs.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.h 00:07:37.763 
INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_debug.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_flush.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_io.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_flush.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_io.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.c 00:07:37.763 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_io_class.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool_priv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_misc.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_priv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_cache.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_flush.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_part.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue_priv.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_class.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_metadata.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment_id.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_internal.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cache_line.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.c 00:07:37.763 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_structs.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_status.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_common.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.c 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.h 00:07:37.763 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.c 00:07:37.764 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_bit.h 00:07:37.764 CC env_ocf/mpool.o 00:07:37.764 CC env_ocf/ocf_env.o 00:07:37.764 CC env_ocf/src/ocf/cleaning/acp.o 00:07:37.764 CC env_ocf/src/ocf/cleaning/alru.o 00:07:37.764 CC env_ocf/src/ocf/cleaning/nop.o 00:07:37.764 CC env_ocf/src/ocf/cleaning/cleaning.o 00:07:37.764 CC env_ocf/src/ocf/ocf_logger.o 00:07:37.764 CC env_ocf/src/ocf/ocf_stats.o 00:07:37.764 CC env_ocf/src/ocf/ocf_io.o 00:07:37.764 CC env_ocf/src/ocf/ocf_lru.o 00:07:37.764 CC env_ocf/src/ocf/ocf_cache.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_cache_line.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_request.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_alock.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_realloc.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_list.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_refcnt.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_parallelize.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_user_part.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_rbtree.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_cleaner.o 
00:07:37.764 CC env_ocf/src/ocf/utils/utils_io.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_generator.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_pipeline.o 00:07:37.764 CC env_ocf/src/ocf/utils/utils_async_lock.o 00:07:37.764 CC env_ocf/src/ocf/ocf_space.o 00:07:37.764 CC env_ocf/src/ocf/ocf_stats_builder.o 00:07:37.764 CC env_ocf/src/ocf/concurrency/ocf_concurrency.o 00:07:37.764 CC env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.o 00:07:37.764 CC env_ocf/src/ocf/concurrency/ocf_mio_concurrency.o 00:07:37.764 CC env_ocf/src/ocf/concurrency/ocf_pio_concurrency.o 00:07:37.764 CC env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.o 00:07:37.764 CC env_ocf/src/ocf/ocf_seq_cutoff.o 00:07:37.764 CC env_ocf/src/ocf/ocf_composite_volume.o 00:07:37.764 CC env_ocf/src/ocf/promotion/nhit/nhit_hash.o 00:07:37.764 CC env_ocf/src/ocf/ocf_request.o 00:07:37.764 CC env_ocf/src/ocf/promotion/nhit/nhit.o 00:07:37.764 CC env_ocf/src/ocf/promotion/promotion.o 00:07:37.764 CC env_ocf/src/ocf/ocf_queue.o 00:07:37.764 CC env_ocf/src/ocf/ocf_volume.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_wa.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_fast.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_bf.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_wt.o 00:07:37.764 CC env_ocf/src/ocf/engine/cache_engine.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_common.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_zero.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_io.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_d2c.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_pt.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_flush.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_inv.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_discard.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_rd.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_wo.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_wb.o 00:07:37.764 CC env_ocf/src/ocf/engine/engine_wi.o 00:07:37.764 CC env_ocf/src/ocf/mngt/ocf_mngt_io_class.o 00:07:37.764 CC env_ocf/src/ocf/mngt/ocf_mngt_misc.o 00:07:37.764 CC env_ocf/src/ocf/mngt/ocf_mngt_core.o 00:07:37.764 CC env_ocf/src/ocf/mngt/ocf_mngt_cache.o 00:07:37.764 CC env_ocf/src/ocf/mngt/ocf_mngt_flush.o 00:07:37.764 CC env_ocf/src/ocf/mngt/ocf_mngt_common.o 00:07:37.764 CC env_ocf/src/ocf/mngt/ocf_mngt_core_pool.o 00:07:37.764 CC env_ocf/src/ocf/ocf_core.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_misc.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_raw.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_core.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_collision.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_segment.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_superblock.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_eviction_policy.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_io.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_passive_update.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_raw_volatile.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_partition.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_raw_atomic.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_raw_dynamic.o 00:07:37.764 CC env_ocf/src/ocf/metadata/metadata_cleaning_policy.o 00:07:37.764 CC env_ocf/src/ocf/ocf_ctx.o 00:07:37.764 CC env_ocf/src/ocf/ocf_metadata.o 00:07:37.764 CC env_ocf/src/ocf/ocf_io_class.o 00:07:38.330 LIB libspdk_ocfenv.a 00:07:38.588 cp /var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a 
/var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:07:38.846 make: Leaving directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf' 00:07:38.846 23:49:17 autobuild_ocf_precompile -- common/autobuild_common.sh@25 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a' 00:07:38.846 23:49:17 autobuild_ocf_precompile -- common/autobuild_common.sh@27 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:07:38.846 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:07:38.846 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:07:39.412 Using 'verbs' RDMA provider 00:07:52.541 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal.log)...done. 00:08:04.735 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:08:04.993 Creating mk/config.mk...done. 00:08:04.993 Creating mk/cc.flags.mk...done. 00:08:04.993 Type 'make' to build. 00:08:04.993 00:08:04.993 real 0m55.149s 00:08:04.993 user 1m3.175s 00:08:04.993 sys 0m26.160s 00:08:04.994 23:49:43 autobuild_ocf_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:04.994 23:49:43 autobuild_ocf_precompile -- common/autotest_common.sh@10 -- $ set +x 00:08:04.994 ************************************ 00:08:04.994 END TEST autobuild_ocf_precompile 00:08:04.994 ************************************ 00:08:04.994 23:49:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:04.994 23:49:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:04.994 23:49:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:08:04.994 23:49:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:08:04.994 23:49:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:08:04.994 23:49:43 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a --with-shared 00:08:04.994 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:08:04.994 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:08:05.561 Using 'verbs' RDMA provider 00:08:18.696 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal.log)...done. 00:08:30.901 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:08:30.901 Creating mk/config.mk...done. 00:08:30.901 Creating mk/cc.flags.mk...done. 00:08:30.901 Type 'make' to build. 
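(Editor's sketch: the OCF precompile flow is scattered across the trace above, so here is the condensed sequence. Flags and paths are copied from the log, including the doubled slash in the --with-ocf path as logged; passing CC=gcc/CCAR=ar as make variables is an assumption about how autobuild_common.sh forwards them.)

    # 1. Configure with --enable-coverage stripped (the precompile runs
    #    "sed s/--enable-coverage//g" over the config params, per the log).
    cd /var/jenkins/workspace/nvme-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --with-ocf --enable-ubsan --with-ublk

    # 2. Precompile OCF into a standalone archive with plain gcc/ar.
    make -j48 include/spdk/config.h
    make -j48 -C lib/env_ocf exportlib O="$PWD/ocf.a" CC=gcc CCAR=ar

    # 3. Reconfigure against the archive (autobuild adds --with-shared),
    #    then build everything.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --with-ocf --enable-ubsan --enable-coverage --with-ublk \
      --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a --with-shared
    make -j48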
00:08:30.901 23:50:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:08:30.901 23:50:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:08:30.901 23:50:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:08:30.901 23:50:08 -- common/autotest_common.sh@10 -- $ set +x 00:08:30.901 ************************************ 00:08:30.901 START TEST make 00:08:30.901 ************************************ 00:08:30.901 23:50:08 make -- common/autotest_common.sh@1129 -- $ make -j48 00:08:30.901 make[1]: Nothing to be done for 'all'. 00:08:40.899 The Meson build system 00:08:40.899 Version: 1.5.0 00:08:40.899 Source dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk 00:08:40.899 Build dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp 00:08:40.899 Build type: native build 00:08:40.899 Program cat found: YES (/usr/bin/cat) 00:08:40.899 Project name: DPDK 00:08:40.899 Project version: 24.03.0 00:08:40.899 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:08:40.899 C linker for the host machine: cc ld.bfd 2.40-14 00:08:40.899 Host machine cpu family: x86_64 00:08:40.899 Host machine cpu: x86_64 00:08:40.899 Message: ## Building in Developer Mode ## 00:08:40.899 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:40.899 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:08:40.899 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:40.899 Program python3 found: YES (/usr/bin/python3) 00:08:40.899 Program cat found: YES (/usr/bin/cat) 00:08:40.899 Compiler for C supports arguments -march=native: YES 00:08:40.899 Checking for size of "void *" : 8 00:08:40.899 Checking for size of "void *" : 8 (cached) 00:08:40.899 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:08:40.899 Library m found: YES 00:08:40.899 Library numa found: YES 00:08:40.899 Has header "numaif.h" : YES 00:08:40.899 Library fdt found: NO 00:08:40.899 Library execinfo found: NO 00:08:40.899 Has header "execinfo.h" : YES 00:08:40.899 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:08:40.899 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:40.899 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:40.899 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:40.899 Run-time dependency openssl found: YES 3.1.1 00:08:40.899 Run-time dependency libpcap found: YES 1.10.4 00:08:40.899 Has header "pcap.h" with dependency libpcap: YES 00:08:40.899 Compiler for C supports arguments -Wcast-qual: YES 00:08:40.899 Compiler for C supports arguments -Wdeprecated: YES 00:08:40.899 Compiler for C supports arguments -Wformat: YES 00:08:40.899 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:40.899 Compiler for C supports arguments -Wformat-security: NO 00:08:40.899 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:40.899 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:40.899 Compiler for C supports arguments -Wnested-externs: YES 00:08:40.899 Compiler for C supports arguments -Wold-style-definition: YES 00:08:40.899 Compiler for C supports arguments -Wpointer-arith: YES 00:08:40.899 Compiler for C supports arguments -Wsign-compare: YES 00:08:40.899 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:40.899 Compiler for C supports arguments -Wundef: YES 00:08:40.899 Compiler for C supports 
arguments -Wwrite-strings: YES 00:08:40.899 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:40.899 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:40.899 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:40.899 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:40.899 Program objdump found: YES (/usr/bin/objdump) 00:08:40.899 Compiler for C supports arguments -mavx512f: YES 00:08:40.899 Checking if "AVX512 checking" compiles: YES 00:08:40.899 Fetching value of define "__SSE4_2__" : 1 00:08:40.899 Fetching value of define "__AES__" : 1 00:08:40.899 Fetching value of define "__AVX__" : 1 00:08:40.899 Fetching value of define "__AVX2__" : (undefined) 00:08:40.899 Fetching value of define "__AVX512BW__" : (undefined) 00:08:40.899 Fetching value of define "__AVX512CD__" : (undefined) 00:08:40.899 Fetching value of define "__AVX512DQ__" : (undefined) 00:08:40.899 Fetching value of define "__AVX512F__" : (undefined) 00:08:40.899 Fetching value of define "__AVX512VL__" : (undefined) 00:08:40.899 Fetching value of define "__PCLMUL__" : 1 00:08:40.899 Fetching value of define "__RDRND__" : 1 00:08:40.899 Fetching value of define "__RDSEED__" : (undefined) 00:08:40.899 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:40.899 Fetching value of define "__znver1__" : (undefined) 00:08:40.899 Fetching value of define "__znver2__" : (undefined) 00:08:40.899 Fetching value of define "__znver3__" : (undefined) 00:08:40.899 Fetching value of define "__znver4__" : (undefined) 00:08:40.899 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:40.899 Message: lib/log: Defining dependency "log" 00:08:40.899 Message: lib/kvargs: Defining dependency "kvargs" 00:08:40.899 Message: lib/telemetry: Defining dependency "telemetry" 00:08:40.899 Checking for function "getentropy" : NO 00:08:40.899 Message: lib/eal: Defining dependency "eal" 00:08:40.899 Message: lib/ring: Defining dependency "ring" 00:08:40.899 Message: lib/rcu: Defining dependency "rcu" 00:08:40.899 Message: lib/mempool: Defining dependency "mempool" 00:08:40.899 Message: lib/mbuf: Defining dependency "mbuf" 00:08:40.899 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:40.899 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:40.899 Compiler for C supports arguments -mpclmul: YES 00:08:40.899 Compiler for C supports arguments -maes: YES 00:08:40.899 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:40.899 Compiler for C supports arguments -mavx512bw: YES 00:08:40.899 Compiler for C supports arguments -mavx512dq: YES 00:08:40.899 Compiler for C supports arguments -mavx512vl: YES 00:08:40.899 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:40.899 Compiler for C supports arguments -mavx2: YES 00:08:40.899 Compiler for C supports arguments -mavx: YES 00:08:40.899 Message: lib/net: Defining dependency "net" 00:08:40.899 Message: lib/meter: Defining dependency "meter" 00:08:40.899 Message: lib/ethdev: Defining dependency "ethdev" 00:08:40.899 Message: lib/pci: Defining dependency "pci" 00:08:40.899 Message: lib/cmdline: Defining dependency "cmdline" 00:08:40.899 Message: lib/hash: Defining dependency "hash" 00:08:40.899 Message: lib/timer: Defining dependency "timer" 00:08:40.899 Message: lib/compressdev: Defining dependency "compressdev" 00:08:40.899 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:40.899 Message: lib/dmadev: Defining dependency "dmadev" 00:08:40.899 Compiler for 
C supports arguments -Wno-cast-qual: YES 00:08:40.899 Message: lib/power: Defining dependency "power" 00:08:40.899 Message: lib/reorder: Defining dependency "reorder" 00:08:40.899 Message: lib/security: Defining dependency "security" 00:08:40.899 Has header "linux/userfaultfd.h" : YES 00:08:40.899 Has header "linux/vduse.h" : YES 00:08:40.899 Message: lib/vhost: Defining dependency "vhost" 00:08:40.899 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:40.899 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:40.899 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:40.899 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:40.899 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:40.899 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:40.899 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:40.899 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:40.899 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:40.899 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:40.899 Program doxygen found: YES (/usr/local/bin/doxygen) 00:08:40.899 Configuring doxy-api-html.conf using configuration 00:08:40.899 Configuring doxy-api-man.conf using configuration 00:08:40.899 Program mandb found: YES (/usr/bin/mandb) 00:08:40.899 Program sphinx-build found: NO 00:08:40.899 Configuring rte_build_config.h using configuration 00:08:40.899 Message: 00:08:40.899 ================= 00:08:40.899 Applications Enabled 00:08:40.899 ================= 00:08:40.899 00:08:40.899 apps: 00:08:40.899 00:08:40.899 00:08:40.899 Message: 00:08:40.899 ================= 00:08:40.899 Libraries Enabled 00:08:40.899 ================= 00:08:40.899 00:08:40.899 libs: 00:08:40.899 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:40.899 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:40.899 cryptodev, dmadev, power, reorder, security, vhost, 00:08:40.899 00:08:40.899 Message: 00:08:40.899 =============== 00:08:40.899 Drivers Enabled 00:08:40.899 =============== 00:08:40.899 00:08:40.899 common: 00:08:40.899 00:08:40.899 bus: 00:08:40.899 pci, vdev, 00:08:40.899 mempool: 00:08:40.899 ring, 00:08:40.899 dma: 00:08:40.899 00:08:40.899 net: 00:08:40.899 00:08:40.899 crypto: 00:08:40.899 00:08:40.899 compress: 00:08:40.899 00:08:40.899 vdpa: 00:08:40.899 00:08:40.899 00:08:40.899 Message: 00:08:40.899 ================= 00:08:40.899 Content Skipped 00:08:40.899 ================= 00:08:40.899 00:08:40.899 apps: 00:08:40.899 dumpcap: explicitly disabled via build config 00:08:40.899 graph: explicitly disabled via build config 00:08:40.899 pdump: explicitly disabled via build config 00:08:40.899 proc-info: explicitly disabled via build config 00:08:40.899 test-acl: explicitly disabled via build config 00:08:40.899 test-bbdev: explicitly disabled via build config 00:08:40.899 test-cmdline: explicitly disabled via build config 00:08:40.899 test-compress-perf: explicitly disabled via build config 00:08:40.899 test-crypto-perf: explicitly disabled via build config 00:08:40.899 test-dma-perf: explicitly disabled via build config 00:08:40.899 test-eventdev: explicitly disabled via build config 00:08:40.899 test-fib: explicitly disabled via build config 00:08:40.899 test-flow-perf: explicitly disabled via build config 00:08:40.899 test-gpudev: explicitly disabled via build config 
00:08:40.899 test-mldev: explicitly disabled via build config 00:08:40.900 test-pipeline: explicitly disabled via build config 00:08:40.900 test-pmd: explicitly disabled via build config 00:08:40.900 test-regex: explicitly disabled via build config 00:08:40.900 test-sad: explicitly disabled via build config 00:08:40.900 test-security-perf: explicitly disabled via build config 00:08:40.900 00:08:40.900 libs: 00:08:40.900 argparse: explicitly disabled via build config 00:08:40.900 metrics: explicitly disabled via build config 00:08:40.900 acl: explicitly disabled via build config 00:08:40.900 bbdev: explicitly disabled via build config 00:08:40.900 bitratestats: explicitly disabled via build config 00:08:40.900 bpf: explicitly disabled via build config 00:08:40.900 cfgfile: explicitly disabled via build config 00:08:40.900 distributor: explicitly disabled via build config 00:08:40.900 efd: explicitly disabled via build config 00:08:40.900 eventdev: explicitly disabled via build config 00:08:40.900 dispatcher: explicitly disabled via build config 00:08:40.900 gpudev: explicitly disabled via build config 00:08:40.900 gro: explicitly disabled via build config 00:08:40.900 gso: explicitly disabled via build config 00:08:40.900 ip_frag: explicitly disabled via build config 00:08:40.900 jobstats: explicitly disabled via build config 00:08:40.900 latencystats: explicitly disabled via build config 00:08:40.900 lpm: explicitly disabled via build config 00:08:40.900 member: explicitly disabled via build config 00:08:40.900 pcapng: explicitly disabled via build config 00:08:40.900 rawdev: explicitly disabled via build config 00:08:40.900 regexdev: explicitly disabled via build config 00:08:40.900 mldev: explicitly disabled via build config 00:08:40.900 rib: explicitly disabled via build config 00:08:40.900 sched: explicitly disabled via build config 00:08:40.900 stack: explicitly disabled via build config 00:08:40.900 ipsec: explicitly disabled via build config 00:08:40.900 pdcp: explicitly disabled via build config 00:08:40.900 fib: explicitly disabled via build config 00:08:40.900 port: explicitly disabled via build config 00:08:40.900 pdump: explicitly disabled via build config 00:08:40.900 table: explicitly disabled via build config 00:08:40.900 pipeline: explicitly disabled via build config 00:08:40.900 graph: explicitly disabled via build config 00:08:40.900 node: explicitly disabled via build config 00:08:40.900 00:08:40.900 drivers: 00:08:40.900 common/cpt: not in enabled drivers build config 00:08:40.900 common/dpaax: not in enabled drivers build config 00:08:40.900 common/iavf: not in enabled drivers build config 00:08:40.900 common/idpf: not in enabled drivers build config 00:08:40.900 common/ionic: not in enabled drivers build config 00:08:40.900 common/mvep: not in enabled drivers build config 00:08:40.900 common/octeontx: not in enabled drivers build config 00:08:40.900 bus/auxiliary: not in enabled drivers build config 00:08:40.900 bus/cdx: not in enabled drivers build config 00:08:40.900 bus/dpaa: not in enabled drivers build config 00:08:40.900 bus/fslmc: not in enabled drivers build config 00:08:40.900 bus/ifpga: not in enabled drivers build config 00:08:40.900 bus/platform: not in enabled drivers build config 00:08:40.900 bus/uacce: not in enabled drivers build config 00:08:40.900 bus/vmbus: not in enabled drivers build config 00:08:40.900 common/cnxk: not in enabled drivers build config 00:08:40.900 common/mlx5: not in enabled drivers build config 00:08:40.900 common/nfp: not in 
enabled drivers build config 00:08:40.900 common/nitrox: not in enabled drivers build config 00:08:40.900 common/qat: not in enabled drivers build config 00:08:40.900 common/sfc_efx: not in enabled drivers build config 00:08:40.900 mempool/bucket: not in enabled drivers build config 00:08:40.900 mempool/cnxk: not in enabled drivers build config 00:08:40.900 mempool/dpaa: not in enabled drivers build config 00:08:40.900 mempool/dpaa2: not in enabled drivers build config 00:08:40.900 mempool/octeontx: not in enabled drivers build config 00:08:40.900 mempool/stack: not in enabled drivers build config 00:08:40.900 dma/cnxk: not in enabled drivers build config 00:08:40.900 dma/dpaa: not in enabled drivers build config 00:08:40.900 dma/dpaa2: not in enabled drivers build config 00:08:40.900 dma/hisilicon: not in enabled drivers build config 00:08:40.900 dma/idxd: not in enabled drivers build config 00:08:40.900 dma/ioat: not in enabled drivers build config 00:08:40.900 dma/skeleton: not in enabled drivers build config 00:08:40.900 net/af_packet: not in enabled drivers build config 00:08:40.900 net/af_xdp: not in enabled drivers build config 00:08:40.900 net/ark: not in enabled drivers build config 00:08:40.900 net/atlantic: not in enabled drivers build config 00:08:40.900 net/avp: not in enabled drivers build config 00:08:40.900 net/axgbe: not in enabled drivers build config 00:08:40.900 net/bnx2x: not in enabled drivers build config 00:08:40.900 net/bnxt: not in enabled drivers build config 00:08:40.900 net/bonding: not in enabled drivers build config 00:08:40.900 net/cnxk: not in enabled drivers build config 00:08:40.900 net/cpfl: not in enabled drivers build config 00:08:40.900 net/cxgbe: not in enabled drivers build config 00:08:40.900 net/dpaa: not in enabled drivers build config 00:08:40.900 net/dpaa2: not in enabled drivers build config 00:08:40.900 net/e1000: not in enabled drivers build config 00:08:40.900 net/ena: not in enabled drivers build config 00:08:40.900 net/enetc: not in enabled drivers build config 00:08:40.900 net/enetfec: not in enabled drivers build config 00:08:40.900 net/enic: not in enabled drivers build config 00:08:40.900 net/failsafe: not in enabled drivers build config 00:08:40.900 net/fm10k: not in enabled drivers build config 00:08:40.900 net/gve: not in enabled drivers build config 00:08:40.900 net/hinic: not in enabled drivers build config 00:08:40.900 net/hns3: not in enabled drivers build config 00:08:40.900 net/i40e: not in enabled drivers build config 00:08:40.900 net/iavf: not in enabled drivers build config 00:08:40.900 net/ice: not in enabled drivers build config 00:08:40.900 net/idpf: not in enabled drivers build config 00:08:40.900 net/igc: not in enabled drivers build config 00:08:40.900 net/ionic: not in enabled drivers build config 00:08:40.900 net/ipn3ke: not in enabled drivers build config 00:08:40.900 net/ixgbe: not in enabled drivers build config 00:08:40.900 net/mana: not in enabled drivers build config 00:08:40.900 net/memif: not in enabled drivers build config 00:08:40.900 net/mlx4: not in enabled drivers build config 00:08:40.900 net/mlx5: not in enabled drivers build config 00:08:40.900 net/mvneta: not in enabled drivers build config 00:08:40.900 net/mvpp2: not in enabled drivers build config 00:08:40.900 net/netvsc: not in enabled drivers build config 00:08:40.900 net/nfb: not in enabled drivers build config 00:08:40.900 net/nfp: not in enabled drivers build config 00:08:40.900 net/ngbe: not in enabled drivers build config 00:08:40.900 
net/null: not in enabled drivers build config 00:08:40.900 net/octeontx: not in enabled drivers build config 00:08:40.900 net/octeon_ep: not in enabled drivers build config 00:08:40.900 net/pcap: not in enabled drivers build config 00:08:40.900 net/pfe: not in enabled drivers build config 00:08:40.900 net/qede: not in enabled drivers build config 00:08:40.900 net/ring: not in enabled drivers build config 00:08:40.900 net/sfc: not in enabled drivers build config 00:08:40.900 net/softnic: not in enabled drivers build config 00:08:40.900 net/tap: not in enabled drivers build config 00:08:40.900 net/thunderx: not in enabled drivers build config 00:08:40.900 net/txgbe: not in enabled drivers build config 00:08:40.900 net/vdev_netvsc: not in enabled drivers build config 00:08:40.900 net/vhost: not in enabled drivers build config 00:08:40.900 net/virtio: not in enabled drivers build config 00:08:40.900 net/vmxnet3: not in enabled drivers build config 00:08:40.900 raw/*: missing internal dependency, "rawdev" 00:08:40.900 crypto/armv8: not in enabled drivers build config 00:08:40.900 crypto/bcmfs: not in enabled drivers build config 00:08:40.900 crypto/caam_jr: not in enabled drivers build config 00:08:40.900 crypto/ccp: not in enabled drivers build config 00:08:40.900 crypto/cnxk: not in enabled drivers build config 00:08:40.900 crypto/dpaa_sec: not in enabled drivers build config 00:08:40.900 crypto/dpaa2_sec: not in enabled drivers build config 00:08:40.900 crypto/ipsec_mb: not in enabled drivers build config 00:08:40.900 crypto/mlx5: not in enabled drivers build config 00:08:40.900 crypto/mvsam: not in enabled drivers build config 00:08:40.900 crypto/nitrox: not in enabled drivers build config 00:08:40.900 crypto/null: not in enabled drivers build config 00:08:40.900 crypto/octeontx: not in enabled drivers build config 00:08:40.900 crypto/openssl: not in enabled drivers build config 00:08:40.900 crypto/scheduler: not in enabled drivers build config 00:08:40.900 crypto/uadk: not in enabled drivers build config 00:08:40.900 crypto/virtio: not in enabled drivers build config 00:08:40.900 compress/isal: not in enabled drivers build config 00:08:40.900 compress/mlx5: not in enabled drivers build config 00:08:40.900 compress/nitrox: not in enabled drivers build config 00:08:40.900 compress/octeontx: not in enabled drivers build config 00:08:40.900 compress/zlib: not in enabled drivers build config 00:08:40.900 regex/*: missing internal dependency, "regexdev" 00:08:40.900 ml/*: missing internal dependency, "mldev" 00:08:40.900 vdpa/ifc: not in enabled drivers build config 00:08:40.900 vdpa/mlx5: not in enabled drivers build config 00:08:40.900 vdpa/nfp: not in enabled drivers build config 00:08:40.900 vdpa/sfc: not in enabled drivers build config 00:08:40.900 event/*: missing internal dependency, "eventdev" 00:08:40.900 baseband/*: missing internal dependency, "bbdev" 00:08:40.900 gpu/*: missing internal dependency, "gpudev" 00:08:40.900 00:08:40.900 00:08:40.900 Build targets in project: 85 00:08:40.900 00:08:40.900 DPDK 24.03.0 00:08:40.900 00:08:40.900 User defined options 00:08:40.900 buildtype : debug 00:08:40.900 default_library : shared 00:08:40.900 libdir : lib 00:08:40.900 prefix : /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:08:40.900 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:08:40.900 c_link_args : 00:08:40.900 cpu_instruction_set: native 00:08:40.900 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:08:40.900 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:08:40.900 enable_docs : false 00:08:40.900 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:08:40.900 enable_kmods : false 00:08:40.900 max_lcores : 128 00:08:40.900 tests : false 00:08:40.900 00:08:40.900 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:40.900 ninja: Entering directory `/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp' 00:08:40.901 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:40.901 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:40.901 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:41.161 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:41.161 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:41.161 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:41.161 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:41.161 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:41.161 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:41.161 [10/268] Linking static target lib/librte_kvargs.a 00:08:41.161 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:41.161 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:41.161 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:41.161 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:41.161 [15/268] Linking static target lib/librte_log.a 00:08:41.161 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:41.732 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:41.992 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:41.992 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:41.992 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:41.992 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:41.992 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:41.992 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:41.992 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:41.992 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:41.992 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:41.992 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:41.992 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:41.992 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 
00:08:41.992 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:41.992 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:41.992 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:41.992 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:41.992 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:41.992 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:41.992 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:41.992 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:41.992 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:41.992 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:41.992 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:41.992 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:41.992 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:41.992 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:41.992 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:41.992 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:41.992 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:41.992 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:41.992 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:41.992 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:41.992 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:41.992 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:41.992 [52/268] Linking static target lib/librte_telemetry.a 00:08:41.992 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:41.992 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:41.992 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:41.992 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:41.992 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:41.992 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:41.992 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:42.254 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:42.254 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:42.254 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:42.254 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:42.254 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:42.513 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:42.513 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:42.513 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:42.513 [68/268] Linking target lib/librte_log.so.24.1 00:08:42.774 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:42.774 
[70/268] Linking static target lib/librte_pci.a 00:08:42.774 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:42.774 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:42.774 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:42.774 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:42.774 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:42.774 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:42.774 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:42.774 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:42.774 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:42.775 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:42.775 [81/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:08:43.045 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:43.045 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:43.045 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:43.045 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:43.045 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:43.045 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:43.045 [88/268] Linking static target lib/librte_ring.a 00:08:43.045 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:43.045 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:43.045 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:43.045 [92/268] Linking target lib/librte_kvargs.so.24.1 00:08:43.045 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:43.045 [94/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:43.045 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:43.045 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:43.045 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:43.045 [98/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.045 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:43.045 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:43.045 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:43.045 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:43.045 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:43.045 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:43.045 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:43.045 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:43.045 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:43.045 [108/268] Linking static target lib/librte_meter.a 00:08:43.045 [109/268] Linking target lib/librte_telemetry.so.24.1 00:08:43.045 [110/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.307 [111/268] 
Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:43.307 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:43.307 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:43.307 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:43.307 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:43.307 [116/268] Linking static target lib/librte_rcu.a 00:08:43.307 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:43.307 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:43.307 [119/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:43.307 [120/268] Linking static target lib/librte_eal.a 00:08:43.307 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:43.307 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:43.307 [123/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:08:43.307 [124/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:08:43.307 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:43.307 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:43.307 [127/268] Linking static target lib/librte_mempool.a 00:08:43.307 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:43.307 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:43.307 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:43.575 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:43.575 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:43.575 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:43.575 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:43.575 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:43.575 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:43.575 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:43.575 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.575 [139/268] Linking static target lib/librte_net.a 00:08:43.576 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:43.576 [141/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.835 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:43.835 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:43.835 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:43.835 [145/268] Linking static target lib/librte_cmdline.a 00:08:43.835 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:43.836 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:43.836 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.836 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:43.836 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:44.096 [151/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:44.096 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:44.096 [153/268] Linking static target lib/librte_timer.a 00:08:44.096 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:44.096 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:44.096 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:44.096 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.096 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:44.096 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:44.096 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:44.096 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:44.096 [162/268] Linking static target lib/librte_dmadev.a 00:08:44.355 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:44.355 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:44.355 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:44.355 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:44.355 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:44.355 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.355 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:44.355 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:44.355 [171/268] Linking static target lib/librte_compressdev.a 00:08:44.355 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.355 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:44.355 [174/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:44.355 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:44.614 [176/268] Linking static target lib/librte_power.a 00:08:44.614 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:44.614 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:44.614 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:44.614 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:44.614 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:44.614 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:44.614 [183/268] Linking static target lib/librte_hash.a 00:08:44.614 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:44.614 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:44.614 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:44.614 [187/268] Linking static target lib/librte_reorder.a 00:08:44.614 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:44.614 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.614 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:44.614 [191/268] Compiling 
C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:44.873 [192/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:44.873 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.873 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:44.873 [195/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.873 [196/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:44.873 [197/268] Linking static target lib/librte_mbuf.a 00:08:44.873 [198/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.873 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:44.873 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:44.873 [201/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:44.873 [202/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:44.873 [203/268] Linking static target drivers/librte_bus_pci.a 00:08:44.873 [204/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:44.873 [205/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:44.873 [206/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:44.873 [207/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:44.873 [208/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:44.873 [209/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:44.873 [210/268] Linking static target lib/librte_security.a 00:08:44.873 [211/268] Linking static target drivers/librte_bus_vdev.a 00:08:45.131 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.131 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.131 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:45.131 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:45.131 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:45.131 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:45.131 [218/268] Linking static target drivers/librte_mempool_ring.a 00:08:45.131 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.131 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:45.390 [221/268] Linking static target lib/librte_ethdev.a 00:08:45.390 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.390 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.390 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.390 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:45.390 [226/268] Linking static target lib/librte_cryptodev.a 00:08:46.764 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.697 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:49.597 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.856 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.856 [231/268] Linking target lib/librte_eal.so.24.1 00:08:50.115 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:50.115 [233/268] Linking target lib/librte_pci.so.24.1 00:08:50.115 [234/268] Linking target lib/librte_ring.so.24.1 00:08:50.115 [235/268] Linking target lib/librte_meter.so.24.1 00:08:50.115 [236/268] Linking target lib/librte_timer.so.24.1 00:08:50.115 [237/268] Linking target lib/librte_dmadev.so.24.1 00:08:50.115 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:50.115 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:50.115 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:50.115 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:50.115 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:50.115 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:50.115 [244/268] Linking target lib/librte_rcu.so.24.1 00:08:50.115 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:50.115 [246/268] Linking target lib/librte_mempool.so.24.1 00:08:50.373 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:50.373 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:50.373 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:50.373 [250/268] Linking target lib/librte_mbuf.so.24.1 00:08:50.632 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:50.632 [252/268] Linking target lib/librte_reorder.so.24.1 00:08:50.632 [253/268] Linking target lib/librte_compressdev.so.24.1 00:08:50.632 [254/268] Linking target lib/librte_net.so.24.1 00:08:50.632 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:08:50.890 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:50.890 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:50.890 [258/268] Linking target lib/librte_hash.so.24.1 00:08:50.890 [259/268] Linking target lib/librte_security.so.24.1 00:08:50.890 [260/268] Linking target lib/librte_cmdline.so.24.1 00:08:51.148 [261/268] Linking target lib/librte_ethdev.so.24.1 00:08:51.148 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:51.148 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:51.407 [264/268] Linking target lib/librte_power.so.24.1 00:08:55.595 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:55.595 [266/268] Linking static target lib/librte_vhost.a 00:08:56.161 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:56.161 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:56.161 INFO: autodetecting backend as ninja 00:08:56.161 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp -j 48 00:09:18.084 make[3]: 
'/var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a' is up to date. 00:09:18.084 CC lib/ut_mock/mock.o 00:09:18.084 CC lib/log/log.o 00:09:18.084 CC lib/log/log_flags.o 00:09:18.084 CC lib/log/log_deprecated.o 00:09:18.084 CC lib/ut/ut.o 00:09:18.084 LIB libspdk_log.a 00:09:18.084 LIB libspdk_ut_mock.a 00:09:18.084 SO libspdk_ut_mock.so.6.0 00:09:18.084 SO libspdk_log.so.7.1 00:09:18.084 LIB libspdk_ut.a 00:09:18.084 SO libspdk_ut.so.2.0 00:09:18.084 SYMLINK libspdk_ut_mock.so 00:09:18.084 SYMLINK libspdk_log.so 00:09:18.084 SYMLINK libspdk_ut.so 00:09:18.084 CXX lib/trace_parser/trace.o 00:09:18.084 CC lib/dma/dma.o 00:09:18.084 CC lib/ioat/ioat.o 00:09:18.084 CC lib/util/base64.o 00:09:18.084 CC lib/util/bit_array.o 00:09:18.084 CC lib/util/cpuset.o 00:09:18.084 CC lib/util/crc16.o 00:09:18.084 CC lib/util/crc32.o 00:09:18.084 CC lib/util/crc32c.o 00:09:18.084 CC lib/util/crc32_ieee.o 00:09:18.084 CC lib/util/crc64.o 00:09:18.084 CC lib/util/dif.o 00:09:18.084 CC lib/util/fd.o 00:09:18.084 CC lib/util/fd_group.o 00:09:18.084 CC lib/util/file.o 00:09:18.084 CC lib/util/hexlify.o 00:09:18.084 CC lib/util/iov.o 00:09:18.084 CC lib/util/math.o 00:09:18.084 CC lib/util/net.o 00:09:18.084 CC lib/util/pipe.o 00:09:18.084 CC lib/util/strerror_tls.o 00:09:18.084 CC lib/util/string.o 00:09:18.084 CC lib/util/uuid.o 00:09:18.084 CC lib/util/xor.o 00:09:18.084 CC lib/util/zipf.o 00:09:18.084 CC lib/util/md5.o 00:09:18.084 CC lib/vfio_user/host/vfio_user_pci.o 00:09:18.084 CC lib/vfio_user/host/vfio_user.o 00:09:18.084 LIB libspdk_dma.a 00:09:18.084 SO libspdk_dma.so.5.0 00:09:18.084 SYMLINK libspdk_dma.so 00:09:18.084 LIB libspdk_ioat.a 00:09:18.084 SO libspdk_ioat.so.7.0 00:09:18.084 SYMLINK libspdk_ioat.so 00:09:18.084 LIB libspdk_vfio_user.a 00:09:18.084 SO libspdk_vfio_user.so.5.0 00:09:18.084 SYMLINK libspdk_vfio_user.so 00:09:18.085 LIB libspdk_util.a 00:09:18.085 SO libspdk_util.so.10.1 00:09:18.342 SYMLINK libspdk_util.so 00:09:18.342 CC lib/rdma_utils/rdma_utils.o 00:09:18.342 CC lib/env_dpdk/env.o 00:09:18.342 CC lib/idxd/idxd.o 00:09:18.342 CC lib/env_dpdk/memory.o 00:09:18.342 CC lib/env_dpdk/pci.o 00:09:18.342 CC lib/idxd/idxd_user.o 00:09:18.342 CC lib/env_dpdk/init.o 00:09:18.342 CC lib/conf/conf.o 00:09:18.342 CC lib/idxd/idxd_kernel.o 00:09:18.342 CC lib/env_dpdk/threads.o 00:09:18.342 CC lib/env_dpdk/pci_ioat.o 00:09:18.342 CC lib/env_dpdk/pci_virtio.o 00:09:18.342 CC lib/json/json_parse.o 00:09:18.342 CC lib/vmd/vmd.o 00:09:18.342 CC lib/env_dpdk/pci_vmd.o 00:09:18.342 CC lib/json/json_util.o 00:09:18.342 CC lib/env_dpdk/pci_idxd.o 00:09:18.342 CC lib/vmd/led.o 00:09:18.342 CC lib/json/json_write.o 00:09:18.342 CC lib/env_dpdk/pci_event.o 00:09:18.342 CC lib/env_dpdk/sigbus_handler.o 00:09:18.342 CC lib/env_dpdk/pci_dpdk.o 00:09:18.342 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:18.342 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:18.600 LIB libspdk_trace_parser.a 00:09:18.600 SO libspdk_trace_parser.so.6.0 00:09:18.600 SYMLINK libspdk_trace_parser.so 00:09:18.600 LIB libspdk_conf.a 00:09:18.600 SO libspdk_conf.so.6.0 00:09:18.600 LIB libspdk_rdma_utils.a 00:09:18.857 LIB libspdk_json.a 00:09:18.857 SYMLINK libspdk_conf.so 00:09:18.857 SO libspdk_rdma_utils.so.1.0 00:09:18.857 SO libspdk_json.so.6.0 00:09:18.857 SYMLINK libspdk_rdma_utils.so 00:09:18.857 SYMLINK libspdk_json.so 00:09:18.857 CC lib/rdma_provider/common.o 00:09:18.857 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:18.857 CC lib/jsonrpc/jsonrpc_server.o 00:09:18.857 CC lib/jsonrpc/jsonrpc_server_tcp.o 
00:09:18.857 CC lib/jsonrpc/jsonrpc_client.o 00:09:18.857 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:19.115 LIB libspdk_idxd.a 00:09:19.115 SO libspdk_idxd.so.12.1 00:09:19.115 LIB libspdk_vmd.a 00:09:19.115 SYMLINK libspdk_idxd.so 00:09:19.115 SO libspdk_vmd.so.6.0 00:09:19.115 SYMLINK libspdk_vmd.so 00:09:19.115 LIB libspdk_rdma_provider.a 00:09:19.115 SO libspdk_rdma_provider.so.7.0 00:09:19.373 LIB libspdk_jsonrpc.a 00:09:19.373 SO libspdk_jsonrpc.so.6.0 00:09:19.373 SYMLINK libspdk_rdma_provider.so 00:09:19.373 SYMLINK libspdk_jsonrpc.so 00:09:19.631 CC lib/rpc/rpc.o 00:09:19.889 LIB libspdk_rpc.a 00:09:20.148 SO libspdk_rpc.so.6.0 00:09:20.148 SYMLINK libspdk_rpc.so 00:09:20.406 CC lib/trace/trace.o 00:09:20.406 CC lib/trace/trace_flags.o 00:09:20.406 CC lib/trace/trace_rpc.o 00:09:20.406 CC lib/keyring/keyring.o 00:09:20.406 CC lib/keyring/keyring_rpc.o 00:09:20.406 CC lib/notify/notify.o 00:09:20.406 CC lib/notify/notify_rpc.o 00:09:20.406 LIB libspdk_notify.a 00:09:20.406 SO libspdk_notify.so.6.0 00:09:20.664 SYMLINK libspdk_notify.so 00:09:20.664 LIB libspdk_keyring.a 00:09:20.664 LIB libspdk_trace.a 00:09:20.664 SO libspdk_keyring.so.2.0 00:09:20.664 SO libspdk_trace.so.11.0 00:09:20.664 SYMLINK libspdk_keyring.so 00:09:20.664 SYMLINK libspdk_trace.so 00:09:20.922 CC lib/thread/thread.o 00:09:20.922 CC lib/thread/iobuf.o 00:09:20.922 CC lib/sock/sock_rpc.o 00:09:20.922 CC lib/sock/sock.o 00:09:21.181 LIB libspdk_env_dpdk.a 00:09:21.181 SO libspdk_env_dpdk.so.15.1 00:09:21.439 SYMLINK libspdk_env_dpdk.so 00:09:21.698 LIB libspdk_sock.a 00:09:21.698 SO libspdk_sock.so.10.0 00:09:21.698 SYMLINK libspdk_sock.so 00:09:21.957 CC lib/nvme/nvme_ctrlr.o 00:09:21.957 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:21.957 CC lib/nvme/nvme_fabric.o 00:09:21.957 CC lib/nvme/nvme_ns_cmd.o 00:09:21.957 CC lib/nvme/nvme_ns.o 00:09:21.957 CC lib/nvme/nvme_pcie_common.o 00:09:21.957 CC lib/nvme/nvme_pcie.o 00:09:21.957 CC lib/nvme/nvme_qpair.o 00:09:21.957 CC lib/nvme/nvme.o 00:09:21.957 CC lib/nvme/nvme_quirks.o 00:09:21.957 CC lib/nvme/nvme_transport.o 00:09:21.957 CC lib/nvme/nvme_discovery.o 00:09:21.957 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:21.957 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:21.957 CC lib/nvme/nvme_tcp.o 00:09:21.957 CC lib/nvme/nvme_opal.o 00:09:21.957 CC lib/nvme/nvme_io_msg.o 00:09:21.957 CC lib/nvme/nvme_poll_group.o 00:09:21.957 CC lib/nvme/nvme_zns.o 00:09:21.957 CC lib/nvme/nvme_stubs.o 00:09:21.957 CC lib/nvme/nvme_auth.o 00:09:21.957 CC lib/nvme/nvme_cuse.o 00:09:21.957 CC lib/nvme/nvme_rdma.o 00:09:22.893 LIB libspdk_thread.a 00:09:22.893 SO libspdk_thread.so.11.0 00:09:22.893 SYMLINK libspdk_thread.so 00:09:23.152 CC lib/blob/blobstore.o 00:09:23.152 CC lib/virtio/virtio.o 00:09:23.152 CC lib/fsdev/fsdev.o 00:09:23.152 CC lib/init/json_config.o 00:09:23.152 CC lib/accel/accel.o 00:09:23.152 CC lib/blob/request.o 00:09:23.152 CC lib/blob/zeroes.o 00:09:23.152 CC lib/fsdev/fsdev_io.o 00:09:23.152 CC lib/init/subsystem.o 00:09:23.152 CC lib/virtio/virtio_vhost_user.o 00:09:23.152 CC lib/fsdev/fsdev_rpc.o 00:09:23.152 CC lib/accel/accel_rpc.o 00:09:23.152 CC lib/init/subsystem_rpc.o 00:09:23.152 CC lib/blob/blob_bs_dev.o 00:09:23.152 CC lib/accel/accel_sw.o 00:09:23.152 CC lib/init/rpc.o 00:09:23.152 CC lib/virtio/virtio_vfio_user.o 00:09:23.152 CC lib/virtio/virtio_pci.o 00:09:23.410 LIB libspdk_init.a 00:09:23.410 SO libspdk_init.so.6.0 00:09:23.410 SYMLINK libspdk_init.so 00:09:23.410 LIB libspdk_virtio.a 00:09:23.668 SO libspdk_virtio.so.7.0 00:09:23.668 SYMLINK libspdk_virtio.so 
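The LIB / SO / SYMLINK triples in this stretch of the build correspond to SPDK packaging each library three ways when --with-shared is enabled: a static archive, a versioned shared object, and an unversioned symlink beside it. Below is a minimal sketch of what the SYMLINK step amounts to, assuming the default build/lib output directory; it is an illustration, not the actual Makefile rule.

  # Hypothetical illustration of "SYMLINK libspdk_virtio.so" above:
  # point the unversioned library name at the versioned shared object.
  cd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib
  ln -sf libspdk_virtio.so.7.0 libspdk_virtio.so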
00:09:23.668 CC lib/event/app.o 00:09:23.668 CC lib/event/reactor.o 00:09:23.668 CC lib/event/log_rpc.o 00:09:23.668 CC lib/event/app_rpc.o 00:09:23.668 CC lib/event/scheduler_static.o 00:09:23.926 LIB libspdk_fsdev.a 00:09:23.926 SO libspdk_fsdev.so.2.0 00:09:24.184 SYMLINK libspdk_fsdev.so 00:09:24.184 LIB libspdk_event.a 00:09:24.184 SO libspdk_event.so.14.0 00:09:24.184 SYMLINK libspdk_event.so 00:09:24.184 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:24.443 LIB libspdk_accel.a 00:09:24.701 SO libspdk_accel.so.16.0 00:09:24.701 SYMLINK libspdk_accel.so 00:09:24.701 LIB libspdk_nvme.a 00:09:24.959 SO libspdk_nvme.so.15.0 00:09:24.959 CC lib/bdev/bdev.o 00:09:24.959 CC lib/bdev/bdev_rpc.o 00:09:24.959 CC lib/bdev/bdev_zone.o 00:09:24.959 CC lib/bdev/part.o 00:09:24.959 CC lib/bdev/scsi_nvme.o 00:09:25.217 SYMLINK libspdk_nvme.so 00:09:25.217 LIB libspdk_fuse_dispatcher.a 00:09:25.217 SO libspdk_fuse_dispatcher.so.1.0 00:09:25.475 SYMLINK libspdk_fuse_dispatcher.so 00:09:29.007 LIB libspdk_blob.a 00:09:29.007 SO libspdk_blob.so.12.0 00:09:29.007 SYMLINK libspdk_blob.so 00:09:29.007 CC lib/lvol/lvol.o 00:09:29.007 CC lib/blobfs/blobfs.o 00:09:29.007 CC lib/blobfs/tree.o 00:09:30.383 LIB libspdk_blobfs.a 00:09:30.383 SO libspdk_blobfs.so.11.0 00:09:30.383 SYMLINK libspdk_blobfs.so 00:09:30.949 LIB libspdk_lvol.a 00:09:30.949 SO libspdk_lvol.so.11.0 00:09:30.949 LIB libspdk_bdev.a 00:09:30.949 SYMLINK libspdk_lvol.so 00:09:30.949 SO libspdk_bdev.so.17.0 00:09:31.208 SYMLINK libspdk_bdev.so 00:09:31.208 CC lib/ublk/ublk.o 00:09:31.208 CC lib/nbd/nbd.o 00:09:31.208 CC lib/nvmf/ctrlr.o 00:09:31.208 CC lib/ublk/ublk_rpc.o 00:09:31.208 CC lib/nvmf/ctrlr_discovery.o 00:09:31.208 CC lib/nbd/nbd_rpc.o 00:09:31.208 CC lib/nvmf/ctrlr_bdev.o 00:09:31.208 CC lib/nvmf/subsystem.o 00:09:31.208 CC lib/scsi/dev.o 00:09:31.208 CC lib/nvmf/nvmf.o 00:09:31.208 CC lib/scsi/lun.o 00:09:31.208 CC lib/nvmf/nvmf_rpc.o 00:09:31.208 CC lib/scsi/port.o 00:09:31.208 CC lib/nvmf/transport.o 00:09:31.208 CC lib/scsi/scsi.o 00:09:31.208 CC lib/nvmf/tcp.o 00:09:31.208 CC lib/scsi/scsi_bdev.o 00:09:31.208 CC lib/nvmf/stubs.o 00:09:31.208 CC lib/nvmf/mdns_server.o 00:09:31.208 CC lib/scsi/scsi_pr.o 00:09:31.208 CC lib/nvmf/rdma.o 00:09:31.208 CC lib/scsi/task.o 00:09:31.208 CC lib/scsi/scsi_rpc.o 00:09:31.208 CC lib/ftl/ftl_core.o 00:09:31.208 CC lib/nvmf/auth.o 00:09:31.208 CC lib/ftl/ftl_layout.o 00:09:31.208 CC lib/ftl/ftl_init.o 00:09:31.208 CC lib/ftl/ftl_debug.o 00:09:31.208 CC lib/ftl/ftl_sb.o 00:09:31.208 CC lib/ftl/ftl_io.o 00:09:31.208 CC lib/ftl/ftl_l2p.o 00:09:31.208 CC lib/ftl/ftl_l2p_flat.o 00:09:31.208 CC lib/ftl/ftl_band.o 00:09:31.208 CC lib/ftl/ftl_nv_cache.o 00:09:31.208 CC lib/ftl/ftl_band_ops.o 00:09:31.208 CC lib/ftl/ftl_writer.o 00:09:31.208 CC lib/ftl/ftl_reloc.o 00:09:31.208 CC lib/ftl/ftl_rq.o 00:09:31.208 CC lib/ftl/ftl_l2p_cache.o 00:09:31.208 CC lib/ftl/ftl_p2l.o 00:09:31.208 CC lib/ftl/mngt/ftl_mngt.o 00:09:31.208 CC lib/ftl/ftl_p2l_log.o 00:09:31.208 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:31.208 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:31.472 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:31.472 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:31.472 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:31.472 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:31.737 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:31.737 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:31.737 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:31.737 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:31.737 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:31.737 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:09:31.737 CC lib/ftl/utils/ftl_conf.o 00:09:31.737 CC lib/ftl/utils/ftl_md.o 00:09:31.737 CC lib/ftl/utils/ftl_mempool.o 00:09:31.737 CC lib/ftl/utils/ftl_bitmap.o 00:09:31.737 CC lib/ftl/utils/ftl_property.o 00:09:31.737 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:31.737 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:31.737 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:31.737 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:31.737 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:31.737 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:31.996 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:31.996 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:31.996 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:31.996 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:31.996 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:31.996 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:31.996 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:31.996 CC lib/ftl/base/ftl_base_dev.o 00:09:31.996 CC lib/ftl/base/ftl_base_bdev.o 00:09:31.996 CC lib/ftl/ftl_trace.o 00:09:32.255 LIB libspdk_nbd.a 00:09:32.255 SO libspdk_nbd.so.7.0 00:09:32.255 SYMLINK libspdk_nbd.so 00:09:32.255 LIB libspdk_scsi.a 00:09:32.255 SO libspdk_scsi.so.9.0 00:09:32.513 SYMLINK libspdk_scsi.so 00:09:32.513 LIB libspdk_ublk.a 00:09:32.513 SO libspdk_ublk.so.3.0 00:09:32.513 SYMLINK libspdk_ublk.so 00:09:32.513 CC lib/vhost/vhost.o 00:09:32.513 CC lib/iscsi/conn.o 00:09:32.513 CC lib/vhost/vhost_rpc.o 00:09:32.513 CC lib/iscsi/init_grp.o 00:09:32.513 CC lib/vhost/vhost_scsi.o 00:09:32.513 CC lib/vhost/vhost_blk.o 00:09:32.513 CC lib/iscsi/iscsi.o 00:09:32.513 CC lib/vhost/rte_vhost_user.o 00:09:32.513 CC lib/iscsi/param.o 00:09:32.513 CC lib/iscsi/portal_grp.o 00:09:32.513 CC lib/iscsi/tgt_node.o 00:09:32.513 CC lib/iscsi/iscsi_subsystem.o 00:09:32.513 CC lib/iscsi/iscsi_rpc.o 00:09:32.513 CC lib/iscsi/task.o 00:09:32.771 LIB libspdk_ftl.a 00:09:33.029 SO libspdk_ftl.so.9.0 00:09:33.286 SYMLINK libspdk_ftl.so 00:09:34.219 LIB libspdk_nvmf.a 00:09:34.219 LIB libspdk_iscsi.a 00:09:34.219 SO libspdk_nvmf.so.20.0 00:09:34.219 SO libspdk_iscsi.so.8.0 00:09:34.219 LIB libspdk_vhost.a 00:09:34.219 SO libspdk_vhost.so.8.0 00:09:34.219 SYMLINK libspdk_vhost.so 00:09:34.219 SYMLINK libspdk_nvmf.so 00:09:34.476 SYMLINK libspdk_iscsi.so 00:09:34.735 CC module/env_dpdk/env_dpdk_rpc.o 00:09:34.735 CC module/accel/ioat/accel_ioat.o 00:09:34.735 CC module/accel/ioat/accel_ioat_rpc.o 00:09:34.735 CC module/blob/bdev/blob_bdev.o 00:09:34.735 CC module/accel/dsa/accel_dsa.o 00:09:34.735 CC module/keyring/file/keyring.o 00:09:34.735 CC module/accel/error/accel_error.o 00:09:34.735 CC module/accel/iaa/accel_iaa.o 00:09:34.735 CC module/keyring/linux/keyring.o 00:09:34.735 CC module/accel/error/accel_error_rpc.o 00:09:34.735 CC module/accel/iaa/accel_iaa_rpc.o 00:09:34.735 CC module/accel/dsa/accel_dsa_rpc.o 00:09:34.735 CC module/sock/posix/posix.o 00:09:34.735 CC module/keyring/linux/keyring_rpc.o 00:09:34.735 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:34.735 CC module/keyring/file/keyring_rpc.o 00:09:34.735 CC module/scheduler/gscheduler/gscheduler.o 00:09:34.735 CC module/fsdev/aio/fsdev_aio.o 00:09:34.735 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:34.735 CC module/fsdev/aio/linux_aio_mgr.o 00:09:34.735 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:34.735 LIB libspdk_env_dpdk_rpc.a 00:09:34.735 SO libspdk_env_dpdk_rpc.so.6.0 00:09:34.993 SYMLINK libspdk_env_dpdk_rpc.so 00:09:34.993 LIB libspdk_keyring_linux.a 00:09:34.993 LIB libspdk_keyring_file.a 00:09:34.993 LIB libspdk_scheduler_gscheduler.a 00:09:34.993 LIB 
libspdk_scheduler_dpdk_governor.a 00:09:34.993 SO libspdk_keyring_linux.so.1.0 00:09:34.993 SO libspdk_keyring_file.so.2.0 00:09:34.993 SO libspdk_scheduler_gscheduler.so.4.0 00:09:34.993 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:34.993 LIB libspdk_accel_ioat.a 00:09:34.993 LIB libspdk_scheduler_dynamic.a 00:09:34.993 LIB libspdk_accel_error.a 00:09:34.993 LIB libspdk_accel_iaa.a 00:09:34.993 SO libspdk_accel_ioat.so.6.0 00:09:34.993 SYMLINK libspdk_scheduler_gscheduler.so 00:09:34.993 SYMLINK libspdk_keyring_linux.so 00:09:34.993 SO libspdk_scheduler_dynamic.so.4.0 00:09:34.993 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:34.993 SYMLINK libspdk_keyring_file.so 00:09:34.993 SO libspdk_accel_error.so.2.0 00:09:34.993 SO libspdk_accel_iaa.so.3.0 00:09:34.993 SYMLINK libspdk_accel_ioat.so 00:09:34.993 SYMLINK libspdk_scheduler_dynamic.so 00:09:34.993 SYMLINK libspdk_accel_iaa.so 00:09:34.993 LIB libspdk_accel_dsa.a 00:09:34.993 SYMLINK libspdk_accel_error.so 00:09:35.251 LIB libspdk_blob_bdev.a 00:09:35.251 SO libspdk_accel_dsa.so.5.0 00:09:35.251 SO libspdk_blob_bdev.so.12.0 00:09:35.251 SYMLINK libspdk_accel_dsa.so 00:09:35.251 SYMLINK libspdk_blob_bdev.so 00:09:35.517 CC module/bdev/null/bdev_null.o 00:09:35.517 CC module/bdev/error/vbdev_error.o 00:09:35.517 CC module/bdev/null/bdev_null_rpc.o 00:09:35.517 CC module/bdev/gpt/gpt.o 00:09:35.517 CC module/bdev/delay/vbdev_delay.o 00:09:35.517 CC module/bdev/gpt/vbdev_gpt.o 00:09:35.517 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:35.517 CC module/bdev/error/vbdev_error_rpc.o 00:09:35.517 CC module/bdev/malloc/bdev_malloc.o 00:09:35.517 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:35.517 CC module/bdev/passthru/vbdev_passthru.o 00:09:35.517 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:35.517 CC module/bdev/ftl/bdev_ftl.o 00:09:35.517 CC module/bdev/lvol/vbdev_lvol.o 00:09:35.517 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:35.517 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:35.517 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:35.517 CC module/blobfs/bdev/blobfs_bdev.o 00:09:35.517 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:35.517 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:35.517 CC module/bdev/iscsi/bdev_iscsi.o 00:09:35.517 CC module/bdev/raid/bdev_raid.o 00:09:35.517 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:35.517 CC module/bdev/split/vbdev_split.o 00:09:35.517 CC module/bdev/raid/bdev_raid_rpc.o 00:09:35.517 CC module/bdev/nvme/bdev_nvme.o 00:09:35.517 CC module/bdev/aio/bdev_aio.o 00:09:35.517 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:35.517 CC module/bdev/raid/bdev_raid_sb.o 00:09:35.517 CC module/bdev/split/vbdev_split_rpc.o 00:09:35.517 CC module/bdev/aio/bdev_aio_rpc.o 00:09:35.517 CC module/bdev/nvme/nvme_rpc.o 00:09:35.517 CC module/bdev/raid/raid0.o 00:09:35.517 CC module/bdev/nvme/bdev_mdns_client.o 00:09:35.517 CC module/bdev/raid/raid1.o 00:09:35.517 CC module/bdev/nvme/vbdev_opal.o 00:09:35.517 CC module/bdev/raid/concat.o 00:09:35.517 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:35.517 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:35.517 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:35.517 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:35.517 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:35.517 CC module/bdev/ocf/ctx.o 00:09:35.517 CC module/bdev/ocf/data.o 00:09:35.517 CC module/bdev/ocf/stats.o 00:09:35.517 CC module/bdev/ocf/utils.o 00:09:35.776 LIB libspdk_fsdev_aio.a 00:09:35.776 SO libspdk_fsdev_aio.so.1.0 00:09:35.776 LIB libspdk_sock_posix.a 00:09:35.776 CC module/bdev/ocf/vbdev_ocf.o 
00:09:35.776 CC module/bdev/ocf/vbdev_ocf_rpc.o 00:09:35.776 CC module/bdev/ocf/volume.o 00:09:35.776 SO libspdk_sock_posix.so.6.0 00:09:35.776 LIB libspdk_blobfs_bdev.a 00:09:35.776 SYMLINK libspdk_fsdev_aio.so 00:09:36.033 SO libspdk_blobfs_bdev.so.6.0 00:09:36.033 LIB libspdk_bdev_split.a 00:09:36.033 SYMLINK libspdk_blobfs_bdev.so 00:09:36.033 SYMLINK libspdk_sock_posix.so 00:09:36.033 SO libspdk_bdev_split.so.6.0 00:09:36.033 LIB libspdk_bdev_gpt.a 00:09:36.033 LIB libspdk_bdev_error.a 00:09:36.033 LIB libspdk_bdev_null.a 00:09:36.033 SYMLINK libspdk_bdev_split.so 00:09:36.033 SO libspdk_bdev_gpt.so.6.0 00:09:36.033 LIB libspdk_bdev_passthru.a 00:09:36.033 SO libspdk_bdev_error.so.6.0 00:09:36.033 LIB libspdk_bdev_ftl.a 00:09:36.033 SO libspdk_bdev_null.so.6.0 00:09:36.034 SO libspdk_bdev_passthru.so.6.0 00:09:36.034 SO libspdk_bdev_ftl.so.6.0 00:09:36.034 LIB libspdk_bdev_zone_block.a 00:09:36.034 SYMLINK libspdk_bdev_gpt.so 00:09:36.034 LIB libspdk_bdev_iscsi.a 00:09:36.034 SO libspdk_bdev_zone_block.so.6.0 00:09:36.034 SYMLINK libspdk_bdev_error.so 00:09:36.034 SYMLINK libspdk_bdev_null.so 00:09:36.034 LIB libspdk_bdev_aio.a 00:09:36.034 SYMLINK libspdk_bdev_passthru.so 00:09:36.034 SO libspdk_bdev_iscsi.so.6.0 00:09:36.034 LIB libspdk_bdev_delay.a 00:09:36.034 LIB libspdk_bdev_malloc.a 00:09:36.034 SYMLINK libspdk_bdev_ftl.so 00:09:36.034 SO libspdk_bdev_aio.so.6.0 00:09:36.034 SO libspdk_bdev_delay.so.6.0 00:09:36.034 SO libspdk_bdev_malloc.so.6.0 00:09:36.291 SYMLINK libspdk_bdev_zone_block.so 00:09:36.291 SYMLINK libspdk_bdev_iscsi.so 00:09:36.291 SYMLINK libspdk_bdev_aio.so 00:09:36.291 SYMLINK libspdk_bdev_delay.so 00:09:36.291 SYMLINK libspdk_bdev_malloc.so 00:09:36.291 LIB libspdk_bdev_lvol.a 00:09:36.291 SO libspdk_bdev_lvol.so.6.0 00:09:36.291 LIB libspdk_bdev_virtio.a 00:09:36.291 SO libspdk_bdev_virtio.so.6.0 00:09:36.291 SYMLINK libspdk_bdev_lvol.so 00:09:36.291 SYMLINK libspdk_bdev_virtio.so 00:09:36.549 LIB libspdk_bdev_ocf.a 00:09:36.549 SO libspdk_bdev_ocf.so.6.0 00:09:36.550 SYMLINK libspdk_bdev_ocf.so 00:09:36.809 LIB libspdk_bdev_raid.a 00:09:36.809 SO libspdk_bdev_raid.so.6.0 00:09:37.068 SYMLINK libspdk_bdev_raid.so 00:09:41.250 LIB libspdk_bdev_nvme.a 00:09:41.250 SO libspdk_bdev_nvme.so.7.1 00:09:41.250 SYMLINK libspdk_bdev_nvme.so 00:09:41.250 CC module/event/subsystems/iobuf/iobuf.o 00:09:41.250 CC module/event/subsystems/scheduler/scheduler.o 00:09:41.250 CC module/event/subsystems/keyring/keyring.o 00:09:41.250 CC module/event/subsystems/fsdev/fsdev.o 00:09:41.250 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:41.250 CC module/event/subsystems/vmd/vmd.o 00:09:41.250 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:41.250 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:41.250 CC module/event/subsystems/sock/sock.o 00:09:41.508 LIB libspdk_event_sock.a 00:09:41.508 LIB libspdk_event_iobuf.a 00:09:41.508 LIB libspdk_event_keyring.a 00:09:41.508 LIB libspdk_event_fsdev.a 00:09:41.508 LIB libspdk_event_vhost_blk.a 00:09:41.508 SO libspdk_event_sock.so.5.0 00:09:41.508 SO libspdk_event_keyring.so.1.0 00:09:41.508 SO libspdk_event_iobuf.so.3.0 00:09:41.508 LIB libspdk_event_scheduler.a 00:09:41.508 LIB libspdk_event_vmd.a 00:09:41.508 SO libspdk_event_fsdev.so.1.0 00:09:41.508 SO libspdk_event_vhost_blk.so.3.0 00:09:41.508 SO libspdk_event_scheduler.so.4.0 00:09:41.508 SO libspdk_event_vmd.so.6.0 00:09:41.509 SYMLINK libspdk_event_keyring.so 00:09:41.509 SYMLINK libspdk_event_sock.so 00:09:41.509 SYMLINK libspdk_event_iobuf.so 00:09:41.509 SYMLINK 
libspdk_event_fsdev.so 00:09:41.509 SYMLINK libspdk_event_vhost_blk.so 00:09:41.509 SYMLINK libspdk_event_scheduler.so 00:09:41.509 SYMLINK libspdk_event_vmd.so 00:09:41.766 CC module/event/subsystems/accel/accel.o 00:09:42.024 LIB libspdk_event_accel.a 00:09:42.024 SO libspdk_event_accel.so.6.0 00:09:42.282 SYMLINK libspdk_event_accel.so 00:09:42.282 CC module/event/subsystems/bdev/bdev.o 00:09:42.540 LIB libspdk_event_bdev.a 00:09:42.540 SO libspdk_event_bdev.so.6.0 00:09:42.540 SYMLINK libspdk_event_bdev.so 00:09:42.798 CC module/event/subsystems/ublk/ublk.o 00:09:42.798 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:42.798 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:42.798 CC module/event/subsystems/scsi/scsi.o 00:09:42.798 CC module/event/subsystems/nbd/nbd.o 00:09:43.057 LIB libspdk_event_ublk.a 00:09:43.057 LIB libspdk_event_nbd.a 00:09:43.057 SO libspdk_event_nbd.so.6.0 00:09:43.057 SO libspdk_event_ublk.so.3.0 00:09:43.057 SYMLINK libspdk_event_ublk.so 00:09:43.057 LIB libspdk_event_scsi.a 00:09:43.057 SYMLINK libspdk_event_nbd.so 00:09:43.057 SO libspdk_event_scsi.so.6.0 00:09:43.057 LIB libspdk_event_nvmf.a 00:09:43.057 SYMLINK libspdk_event_scsi.so 00:09:43.057 SO libspdk_event_nvmf.so.6.0 00:09:43.315 SYMLINK libspdk_event_nvmf.so 00:09:43.315 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:43.315 CC module/event/subsystems/iscsi/iscsi.o 00:09:43.574 LIB libspdk_event_vhost_scsi.a 00:09:43.574 SO libspdk_event_vhost_scsi.so.3.0 00:09:43.834 LIB libspdk_event_iscsi.a 00:09:43.834 SYMLINK libspdk_event_vhost_scsi.so 00:09:43.834 SO libspdk_event_iscsi.so.6.0 00:09:43.834 SYMLINK libspdk_event_iscsi.so 00:09:43.834 SO libspdk.so.6.0 00:09:43.834 SYMLINK libspdk.so 00:09:44.095 CC app/trace_record/trace_record.o 00:09:44.095 CC app/spdk_nvme_perf/perf.o 00:09:44.095 CXX app/trace/trace.o 00:09:44.095 CC app/spdk_nvme_identify/identify.o 00:09:44.095 CC app/spdk_lspci/spdk_lspci.o 00:09:44.095 CC app/spdk_nvme_discover/discovery_aer.o 00:09:44.095 CC test/rpc_client/rpc_client_test.o 00:09:44.095 TEST_HEADER include/spdk/accel.h 00:09:44.095 CC app/spdk_top/spdk_top.o 00:09:44.095 TEST_HEADER include/spdk/accel_module.h 00:09:44.095 TEST_HEADER include/spdk/assert.h 00:09:44.095 TEST_HEADER include/spdk/barrier.h 00:09:44.095 TEST_HEADER include/spdk/base64.h 00:09:44.095 TEST_HEADER include/spdk/bdev.h 00:09:44.095 TEST_HEADER include/spdk/bdev_module.h 00:09:44.095 TEST_HEADER include/spdk/bdev_zone.h 00:09:44.095 TEST_HEADER include/spdk/bit_array.h 00:09:44.095 TEST_HEADER include/spdk/bit_pool.h 00:09:44.095 TEST_HEADER include/spdk/blob_bdev.h 00:09:44.095 TEST_HEADER include/spdk/blobfs.h 00:09:44.095 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:44.095 TEST_HEADER include/spdk/blob.h 00:09:44.095 TEST_HEADER include/spdk/conf.h 00:09:44.095 TEST_HEADER include/spdk/config.h 00:09:44.095 TEST_HEADER include/spdk/cpuset.h 00:09:44.095 TEST_HEADER include/spdk/crc16.h 00:09:44.095 TEST_HEADER include/spdk/crc32.h 00:09:44.095 TEST_HEADER include/spdk/crc64.h 00:09:44.095 TEST_HEADER include/spdk/dif.h 00:09:44.095 TEST_HEADER include/spdk/dma.h 00:09:44.095 TEST_HEADER include/spdk/endian.h 00:09:44.095 TEST_HEADER include/spdk/env_dpdk.h 00:09:44.095 TEST_HEADER include/spdk/env.h 00:09:44.095 TEST_HEADER include/spdk/event.h 00:09:44.095 TEST_HEADER include/spdk/fd_group.h 00:09:44.095 TEST_HEADER include/spdk/fd.h 00:09:44.095 TEST_HEADER include/spdk/file.h 00:09:44.095 TEST_HEADER include/spdk/fsdev.h 00:09:44.095 TEST_HEADER include/spdk/fsdev_module.h 
00:09:44.095 TEST_HEADER include/spdk/ftl.h 00:09:44.095 TEST_HEADER include/spdk/gpt_spec.h 00:09:44.095 TEST_HEADER include/spdk/hexlify.h 00:09:44.095 TEST_HEADER include/spdk/histogram_data.h 00:09:44.095 TEST_HEADER include/spdk/idxd.h 00:09:44.095 TEST_HEADER include/spdk/idxd_spec.h 00:09:44.095 TEST_HEADER include/spdk/init.h 00:09:44.095 TEST_HEADER include/spdk/ioat.h 00:09:44.095 TEST_HEADER include/spdk/ioat_spec.h 00:09:44.095 TEST_HEADER include/spdk/iscsi_spec.h 00:09:44.095 TEST_HEADER include/spdk/json.h 00:09:44.095 TEST_HEADER include/spdk/jsonrpc.h 00:09:44.095 TEST_HEADER include/spdk/keyring.h 00:09:44.095 TEST_HEADER include/spdk/keyring_module.h 00:09:44.095 TEST_HEADER include/spdk/likely.h 00:09:44.095 TEST_HEADER include/spdk/log.h 00:09:44.095 TEST_HEADER include/spdk/lvol.h 00:09:44.095 TEST_HEADER include/spdk/md5.h 00:09:44.095 TEST_HEADER include/spdk/memory.h 00:09:44.095 TEST_HEADER include/spdk/mmio.h 00:09:44.095 TEST_HEADER include/spdk/nbd.h 00:09:44.095 TEST_HEADER include/spdk/net.h 00:09:44.095 TEST_HEADER include/spdk/notify.h 00:09:44.095 TEST_HEADER include/spdk/nvme.h 00:09:44.095 TEST_HEADER include/spdk/nvme_intel.h 00:09:44.095 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:44.095 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:44.095 TEST_HEADER include/spdk/nvme_zns.h 00:09:44.095 TEST_HEADER include/spdk/nvme_spec.h 00:09:44.095 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:44.095 TEST_HEADER include/spdk/nvmf.h 00:09:44.095 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:44.095 TEST_HEADER include/spdk/nvmf_transport.h 00:09:44.095 TEST_HEADER include/spdk/nvmf_spec.h 00:09:44.095 TEST_HEADER include/spdk/opal.h 00:09:44.095 TEST_HEADER include/spdk/opal_spec.h 00:09:44.095 TEST_HEADER include/spdk/pci_ids.h 00:09:44.095 TEST_HEADER include/spdk/pipe.h 00:09:44.095 TEST_HEADER include/spdk/queue.h 00:09:44.095 TEST_HEADER include/spdk/reduce.h 00:09:44.095 TEST_HEADER include/spdk/rpc.h 00:09:44.095 TEST_HEADER include/spdk/scheduler.h 00:09:44.096 TEST_HEADER include/spdk/scsi.h 00:09:44.096 TEST_HEADER include/spdk/scsi_spec.h 00:09:44.096 TEST_HEADER include/spdk/sock.h 00:09:44.096 TEST_HEADER include/spdk/stdinc.h 00:09:44.096 TEST_HEADER include/spdk/string.h 00:09:44.096 TEST_HEADER include/spdk/thread.h 00:09:44.096 TEST_HEADER include/spdk/trace.h 00:09:44.096 TEST_HEADER include/spdk/trace_parser.h 00:09:44.096 TEST_HEADER include/spdk/tree.h 00:09:44.096 TEST_HEADER include/spdk/ublk.h 00:09:44.096 TEST_HEADER include/spdk/util.h 00:09:44.096 TEST_HEADER include/spdk/uuid.h 00:09:44.096 TEST_HEADER include/spdk/version.h 00:09:44.096 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:44.096 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:44.096 TEST_HEADER include/spdk/vhost.h 00:09:44.096 TEST_HEADER include/spdk/vmd.h 00:09:44.360 TEST_HEADER include/spdk/xor.h 00:09:44.360 TEST_HEADER include/spdk/zipf.h 00:09:44.360 CXX test/cpp_headers/accel.o 00:09:44.360 CXX test/cpp_headers/accel_module.o 00:09:44.360 CXX test/cpp_headers/assert.o 00:09:44.360 CXX test/cpp_headers/barrier.o 00:09:44.360 CXX test/cpp_headers/base64.o 00:09:44.360 CXX test/cpp_headers/bdev.o 00:09:44.360 CXX test/cpp_headers/bdev_module.o 00:09:44.360 CXX test/cpp_headers/bdev_zone.o 00:09:44.360 CXX test/cpp_headers/bit_array.o 00:09:44.360 CXX test/cpp_headers/bit_pool.o 00:09:44.360 CXX test/cpp_headers/blob_bdev.o 00:09:44.360 CXX test/cpp_headers/blobfs_bdev.o 00:09:44.360 CXX test/cpp_headers/blobfs.o 00:09:44.360 CXX test/cpp_headers/blob.o 00:09:44.360 CXX 
test/cpp_headers/conf.o 00:09:44.360 CXX test/cpp_headers/config.o 00:09:44.360 CXX test/cpp_headers/cpuset.o 00:09:44.360 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:44.360 CXX test/cpp_headers/crc16.o 00:09:44.360 CC app/spdk_dd/spdk_dd.o 00:09:44.360 CC app/iscsi_tgt/iscsi_tgt.o 00:09:44.360 CC app/nvmf_tgt/nvmf_main.o 00:09:44.360 CXX test/cpp_headers/crc32.o 00:09:44.360 CC examples/ioat/perf/perf.o 00:09:44.360 CC examples/ioat/verify/verify.o 00:09:44.360 CC test/thread/poller_perf/poller_perf.o 00:09:44.360 CC examples/util/zipf/zipf.o 00:09:44.360 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:44.360 CC test/app/histogram_perf/histogram_perf.o 00:09:44.360 CC test/env/memory/memory_ut.o 00:09:44.360 CC test/env/vtophys/vtophys.o 00:09:44.360 CC app/fio/nvme/fio_plugin.o 00:09:44.360 CC test/app/stub/stub.o 00:09:44.360 CC app/spdk_tgt/spdk_tgt.o 00:09:44.360 CC test/app/jsoncat/jsoncat.o 00:09:44.360 CC test/env/pci/pci_ut.o 00:09:44.360 CC test/dma/test_dma/test_dma.o 00:09:44.360 CC test/app/bdev_svc/bdev_svc.o 00:09:44.360 CC app/fio/bdev/fio_plugin.o 00:09:44.360 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:44.360 LINK spdk_lspci 00:09:44.360 CC test/env/mem_callbacks/mem_callbacks.o 00:09:44.622 LINK rpc_client_test 00:09:44.622 LINK spdk_nvme_discover 00:09:44.622 LINK jsoncat 00:09:44.622 CXX test/cpp_headers/crc64.o 00:09:44.622 LINK poller_perf 00:09:44.622 CXX test/cpp_headers/dif.o 00:09:44.622 LINK histogram_perf 00:09:44.622 LINK interrupt_tgt 00:09:44.622 LINK spdk_trace_record 00:09:44.622 LINK vtophys 00:09:44.622 LINK env_dpdk_post_init 00:09:44.622 CXX test/cpp_headers/dma.o 00:09:44.622 LINK zipf 00:09:44.622 LINK nvmf_tgt 00:09:44.622 CXX test/cpp_headers/endian.o 00:09:44.622 CXX test/cpp_headers/env_dpdk.o 00:09:44.622 CXX test/cpp_headers/env.o 00:09:44.622 CXX test/cpp_headers/event.o 00:09:44.622 LINK stub 00:09:44.622 CXX test/cpp_headers/fd_group.o 00:09:44.622 CXX test/cpp_headers/fd.o 00:09:44.622 CXX test/cpp_headers/file.o 00:09:44.622 CXX test/cpp_headers/fsdev.o 00:09:44.622 LINK iscsi_tgt 00:09:44.622 CXX test/cpp_headers/fsdev_module.o 00:09:44.622 CXX test/cpp_headers/ftl.o 00:09:44.622 LINK verify 00:09:44.885 CXX test/cpp_headers/gpt_spec.o 00:09:44.885 LINK spdk_tgt 00:09:44.885 CXX test/cpp_headers/hexlify.o 00:09:44.885 LINK bdev_svc 00:09:44.885 LINK ioat_perf 00:09:44.885 CXX test/cpp_headers/histogram_data.o 00:09:44.885 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:44.885 CXX test/cpp_headers/idxd.o 00:09:44.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:44.885 CXX test/cpp_headers/idxd_spec.o 00:09:44.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:44.885 CXX test/cpp_headers/init.o 00:09:44.885 CXX test/cpp_headers/ioat.o 00:09:44.885 CXX test/cpp_headers/ioat_spec.o 00:09:44.885 CXX test/cpp_headers/iscsi_spec.o 00:09:44.885 CXX test/cpp_headers/json.o 00:09:44.885 CXX test/cpp_headers/jsonrpc.o 00:09:45.145 CXX test/cpp_headers/keyring.o 00:09:45.145 LINK spdk_trace 00:09:45.145 CXX test/cpp_headers/keyring_module.o 00:09:45.145 CXX test/cpp_headers/likely.o 00:09:45.145 LINK spdk_dd 00:09:45.145 CXX test/cpp_headers/log.o 00:09:45.145 CXX test/cpp_headers/lvol.o 00:09:45.145 CXX test/cpp_headers/md5.o 00:09:45.145 CXX test/cpp_headers/memory.o 00:09:45.145 LINK pci_ut 00:09:45.145 CXX test/cpp_headers/mmio.o 00:09:45.145 CXX test/cpp_headers/nbd.o 00:09:45.145 CXX test/cpp_headers/net.o 00:09:45.145 CXX test/cpp_headers/notify.o 00:09:45.145 CXX test/cpp_headers/nvme.o 00:09:45.145 CXX 
test/cpp_headers/nvme_intel.o 00:09:45.145 CXX test/cpp_headers/nvme_ocssd.o 00:09:45.145 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:45.145 CXX test/cpp_headers/nvme_spec.o 00:09:45.145 CXX test/cpp_headers/nvme_zns.o 00:09:45.145 LINK nvme_fuzz 00:09:45.145 CC test/event/event_perf/event_perf.o 00:09:45.409 CC test/event/reactor_perf/reactor_perf.o 00:09:45.409 CC test/event/reactor/reactor.o 00:09:45.409 CXX test/cpp_headers/nvmf_cmd.o 00:09:45.409 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:45.409 CC examples/sock/hello_world/hello_sock.o 00:09:45.409 LINK test_dma 00:09:45.409 CC examples/vmd/lsvmd/lsvmd.o 00:09:45.409 CXX test/cpp_headers/nvmf.o 00:09:45.409 CXX test/cpp_headers/nvmf_spec.o 00:09:45.409 CC test/event/app_repeat/app_repeat.o 00:09:45.409 CC examples/idxd/perf/perf.o 00:09:45.409 LINK spdk_bdev 00:09:45.409 CXX test/cpp_headers/nvmf_transport.o 00:09:45.409 CC examples/thread/thread/thread_ex.o 00:09:45.409 LINK spdk_nvme 00:09:45.409 CXX test/cpp_headers/opal.o 00:09:45.409 CXX test/cpp_headers/opal_spec.o 00:09:45.409 CXX test/cpp_headers/pci_ids.o 00:09:45.409 CC examples/vmd/led/led.o 00:09:45.409 CXX test/cpp_headers/pipe.o 00:09:45.409 CXX test/cpp_headers/queue.o 00:09:45.668 CXX test/cpp_headers/reduce.o 00:09:45.668 CXX test/cpp_headers/rpc.o 00:09:45.668 CXX test/cpp_headers/scheduler.o 00:09:45.668 CXX test/cpp_headers/scsi.o 00:09:45.668 CXX test/cpp_headers/scsi_spec.o 00:09:45.668 CXX test/cpp_headers/sock.o 00:09:45.668 LINK reactor_perf 00:09:45.668 CXX test/cpp_headers/stdinc.o 00:09:45.668 CC test/event/scheduler/scheduler.o 00:09:45.668 LINK event_perf 00:09:45.668 LINK reactor 00:09:45.668 CXX test/cpp_headers/string.o 00:09:45.668 CXX test/cpp_headers/thread.o 00:09:45.668 LINK lsvmd 00:09:45.668 CXX test/cpp_headers/trace.o 00:09:45.668 CXX test/cpp_headers/trace_parser.o 00:09:45.668 CXX test/cpp_headers/tree.o 00:09:45.668 CC app/vhost/vhost.o 00:09:45.668 CXX test/cpp_headers/ublk.o 00:09:45.668 CXX test/cpp_headers/util.o 00:09:45.668 CXX test/cpp_headers/uuid.o 00:09:45.668 LINK mem_callbacks 00:09:45.668 LINK app_repeat 00:09:45.668 CXX test/cpp_headers/version.o 00:09:45.668 LINK vhost_fuzz 00:09:45.668 CXX test/cpp_headers/vfio_user_pci.o 00:09:45.668 LINK spdk_nvme_perf 00:09:45.668 LINK spdk_nvme_identify 00:09:45.668 CXX test/cpp_headers/vfio_user_spec.o 00:09:45.668 CXX test/cpp_headers/vhost.o 00:09:45.668 CXX test/cpp_headers/vmd.o 00:09:45.668 CXX test/cpp_headers/xor.o 00:09:45.930 CXX test/cpp_headers/zipf.o 00:09:45.930 LINK led 00:09:45.930 LINK hello_sock 00:09:45.930 LINK spdk_top 00:09:45.930 LINK thread 00:09:45.930 LINK vhost 00:09:45.930 LINK idxd_perf 00:09:45.930 LINK scheduler 00:09:46.188 CC examples/nvme/reconnect/reconnect.o 00:09:46.188 CC examples/nvme/arbitration/arbitration.o 00:09:46.188 CC examples/nvme/abort/abort.o 00:09:46.188 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:46.188 CC examples/nvme/hello_world/hello_world.o 00:09:46.188 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:46.188 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:46.188 CC examples/nvme/hotplug/hotplug.o 00:09:46.447 CC test/nvme/aer/aer.o 00:09:46.447 CC test/nvme/overhead/overhead.o 00:09:46.447 CC test/nvme/cuse/cuse.o 00:09:46.447 CC test/nvme/startup/startup.o 00:09:46.447 CC test/nvme/reserve/reserve.o 00:09:46.447 CC test/nvme/simple_copy/simple_copy.o 00:09:46.447 CC test/nvme/fused_ordering/fused_ordering.o 00:09:46.447 CC test/nvme/sgl/sgl.o 00:09:46.447 CC test/nvme/e2edp/nvme_dp.o 00:09:46.447 CC 
test/nvme/compliance/nvme_compliance.o 00:09:46.447 CC test/nvme/reset/reset.o 00:09:46.447 CC test/nvme/connect_stress/connect_stress.o 00:09:46.447 CC test/nvme/boot_partition/boot_partition.o 00:09:46.447 CC test/nvme/err_injection/err_injection.o 00:09:46.447 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:46.447 CC test/nvme/fdp/fdp.o 00:09:46.447 LINK memory_ut 00:09:46.447 CC test/blobfs/mkfs/mkfs.o 00:09:46.447 CC test/accel/dif/dif.o 00:09:46.447 CC test/lvol/esnap/esnap.o 00:09:46.447 LINK cmb_copy 00:09:46.710 LINK pmr_persistence 00:09:46.710 LINK hotplug 00:09:46.710 LINK boot_partition 00:09:46.710 LINK err_injection 00:09:46.710 LINK hello_world 00:09:46.710 LINK arbitration 00:09:46.710 LINK mkfs 00:09:46.710 CC examples/accel/perf/accel_perf.o 00:09:46.710 CC examples/blob/hello_world/hello_blob.o 00:09:46.710 LINK sgl 00:09:46.710 LINK fused_ordering 00:09:46.710 CC examples/blob/cli/blobcli.o 00:09:46.710 LINK startup 00:09:46.710 LINK connect_stress 00:09:46.710 LINK reconnect 00:09:46.710 LINK doorbell_aers 00:09:46.710 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:46.710 LINK reserve 00:09:46.710 LINK aer 00:09:46.710 LINK overhead 00:09:46.969 LINK nvme_dp 00:09:46.969 LINK simple_copy 00:09:46.969 LINK reset 00:09:46.969 LINK abort 00:09:46.969 LINK fdp 00:09:46.969 LINK nvme_manage 00:09:46.969 LINK nvme_compliance 00:09:47.227 LINK hello_blob 00:09:47.227 LINK hello_fsdev 00:09:47.227 LINK dif 00:09:47.227 LINK accel_perf 00:09:47.227 LINK blobcli 00:09:47.486 LINK iscsi_fuzz 00:09:47.744 CC examples/bdev/hello_world/hello_bdev.o 00:09:47.744 CC examples/bdev/bdevperf/bdevperf.o 00:09:48.001 LINK hello_bdev 00:09:48.001 CC test/bdev/bdevio/bdevio.o 00:09:48.259 LINK cuse 00:09:48.517 LINK bdevperf 00:09:48.517 LINK bdevio 00:09:49.084 CC examples/nvmf/nvmf/nvmf.o 00:09:49.651 LINK nvmf 00:09:52.935 LINK esnap 00:09:53.502 00:09:53.502 real 1m23.293s 00:09:53.502 user 12m22.522s 00:09:53.502 sys 2m35.814s 00:09:53.502 23:51:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:53.502 23:51:31 make -- common/autotest_common.sh@10 -- $ set +x 00:09:53.502 ************************************ 00:09:53.502 END TEST make 00:09:53.502 ************************************ 00:09:53.502 23:51:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:53.502 23:51:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:53.502 23:51:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:53.502 23:51:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:53.502 23:51:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:09:53.502 23:51:31 -- pm/common@44 -- $ pid=370043 00:09:53.502 23:51:31 -- pm/common@50 -- $ kill -TERM 370043 00:09:53.502 23:51:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:53.502 23:51:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:09:53.502 23:51:31 -- pm/common@44 -- $ pid=370045 00:09:53.502 23:51:31 -- pm/common@50 -- $ kill -TERM 370045 00:09:53.502 23:51:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:53.502 23:51:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:09:53.502 23:51:31 -- pm/common@44 -- $ pid=370047 00:09:53.502 23:51:31 -- pm/common@50 -- $ kill -TERM 370047 00:09:53.502 23:51:31 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:09:53.502 23:51:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:09:53.502 23:51:31 -- pm/common@44 -- $ pid=370079 00:09:53.502 23:51:31 -- pm/common@50 -- $ sudo -E kill -TERM 370079 00:09:53.502 23:51:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:53.502 23:51:31 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:09:53.502 23:51:31 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:53.502 23:51:31 -- common/autotest_common.sh@1711 -- # lcov --version 00:09:53.502 23:51:31 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:53.502 23:51:31 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:53.502 23:51:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.502 23:51:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.502 23:51:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.502 23:51:31 -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.502 23:51:31 -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.502 23:51:31 -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.502 23:51:31 -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.502 23:51:31 -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.502 23:51:31 -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.502 23:51:31 -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.502 23:51:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.502 23:51:31 -- scripts/common.sh@344 -- # case "$op" in 00:09:53.502 23:51:31 -- scripts/common.sh@345 -- # : 1 00:09:53.502 23:51:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.502 23:51:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.502 23:51:31 -- scripts/common.sh@365 -- # decimal 1 00:09:53.502 23:51:31 -- scripts/common.sh@353 -- # local d=1 00:09:53.502 23:51:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.502 23:51:31 -- scripts/common.sh@355 -- # echo 1 00:09:53.502 23:51:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.502 23:51:31 -- scripts/common.sh@366 -- # decimal 2 00:09:53.502 23:51:31 -- scripts/common.sh@353 -- # local d=2 00:09:53.502 23:51:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.502 23:51:31 -- scripts/common.sh@355 -- # echo 2 00:09:53.502 23:51:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.502 23:51:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.502 23:51:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.502 23:51:31 -- scripts/common.sh@368 -- # return 0 00:09:53.502 23:51:31 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.502 23:51:31 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:53.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.502 --rc genhtml_branch_coverage=1 00:09:53.502 --rc genhtml_function_coverage=1 00:09:53.502 --rc genhtml_legend=1 00:09:53.502 --rc geninfo_all_blocks=1 00:09:53.502 --rc geninfo_unexecuted_blocks=1 00:09:53.502 00:09:53.502 ' 00:09:53.502 23:51:31 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:53.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.502 --rc genhtml_branch_coverage=1 00:09:53.503 --rc genhtml_function_coverage=1 00:09:53.503 --rc genhtml_legend=1 00:09:53.503 --rc geninfo_all_blocks=1 00:09:53.503 --rc geninfo_unexecuted_blocks=1 00:09:53.503 00:09:53.503 ' 00:09:53.503 23:51:31 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:53.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.503 --rc genhtml_branch_coverage=1 00:09:53.503 --rc genhtml_function_coverage=1 00:09:53.503 --rc genhtml_legend=1 00:09:53.503 --rc geninfo_all_blocks=1 00:09:53.503 --rc geninfo_unexecuted_blocks=1 00:09:53.503 00:09:53.503 ' 00:09:53.503 23:51:31 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:53.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.503 --rc genhtml_branch_coverage=1 00:09:53.503 --rc genhtml_function_coverage=1 00:09:53.503 --rc genhtml_legend=1 00:09:53.503 --rc geninfo_all_blocks=1 00:09:53.503 --rc geninfo_unexecuted_blocks=1 00:09:53.503 00:09:53.503 ' 00:09:53.503 23:51:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.503 23:51:31 -- nvmf/common.sh@7 -- # uname -s 00:09:53.503 23:51:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.503 23:51:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.503 23:51:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.503 23:51:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.503 23:51:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.503 23:51:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.503 23:51:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.503 23:51:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.503 23:51:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.503 23:51:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.503 23:51:32 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a882507-757a-e411-bc42-001e67d39171 00:09:53.503 23:51:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=4a882507-757a-e411-bc42-001e67d39171 00:09:53.503 23:51:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.503 23:51:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.503 23:51:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:53.503 23:51:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.503 23:51:32 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:09:53.503 23:51:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.503 23:51:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.503 23:51:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.503 23:51:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.503 23:51:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.503 23:51:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.503 23:51:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.503 23:51:32 -- paths/export.sh@5 -- # export PATH 00:09:53.503 23:51:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.503 23:51:32 -- nvmf/common.sh@51 -- # : 0 00:09:53.503 23:51:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.503 23:51:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.503 23:51:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.503 23:51:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.503 23:51:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.503 23:51:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.503 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.503 23:51:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.503 23:51:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.503 23:51:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.503 23:51:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:53.503 23:51:32 -- spdk/autotest.sh@32 -- # uname -s 00:09:53.503 23:51:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:53.503 23:51:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:53.503 23:51:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps 
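
Just above, autotest.sh snapshots the host's core handler (old_core_pattern, pointing at systemd-coredump) and creates a coredumps output directory; the echo that follows swaps in SPDK's core-collector.sh. A condensed sketch of that save-and-restore pattern, assuming root and the usual /proc/sys/kernel/core_pattern interface (the restore step is implied rather than shown in this trace):

    # Remember the current handler so it can be put back after the run.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    # Pipe cores to a collector; %P=pid, %s=signal, %t=time per core(5).
    echo '|/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
    # ... run tests ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern
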
00:09:53.503 23:51:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:09:53.503 23:51:32 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps 00:09:53.503 23:51:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:53.761 23:51:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:53.761 23:51:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:53.761 23:51:32 -- spdk/autotest.sh@48 -- # udevadm_pid=448492 00:09:53.762 23:51:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:53.762 23:51:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:53.762 23:51:32 -- pm/common@17 -- # local monitor 00:09:53.762 23:51:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:53.762 23:51:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:53.762 23:51:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:53.762 23:51:32 -- pm/common@21 -- # date +%s 00:09:53.762 23:51:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:53.762 23:51:32 -- pm/common@21 -- # date +%s 00:09:53.762 23:51:32 -- pm/common@25 -- # sleep 1 00:09:53.762 23:51:32 -- pm/common@21 -- # date +%s 00:09:53.762 23:51:32 -- pm/common@21 -- # date +%s 00:09:53.762 23:51:32 -- pm/common@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784692 00:09:53.762 23:51:32 -- pm/common@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784692 00:09:53.762 23:51:32 -- pm/common@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784692 00:09:53.762 23:51:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784692 00:09:53.762 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784692_collect-vmstat.pm.log 00:09:53.762 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784692_collect-cpu-load.pm.log 00:09:53.762 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784692_collect-cpu-temp.pm.log 00:09:53.762 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784692_collect-bmc-pm.bmc.pm.log 00:09:54.696 23:51:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:54.696 23:51:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:54.696 23:51:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.696 23:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.696 23:51:33 -- spdk/autotest.sh@59 -- # create_test_list 00:09:54.696 23:51:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:09:54.696 23:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.696 23:51:33 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh 00:09:54.696 23:51:33 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvme-phy-autotest/spdk 00:09:54.696 23:51:33 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:09:54.696 23:51:33 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output 00:09:54.696 23:51:33 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvme-phy-autotest/spdk 00:09:54.696 23:51:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:54.696 23:51:33 -- common/autotest_common.sh@1457 -- # uname 00:09:54.696 23:51:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:09:54.696 23:51:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:54.697 23:51:33 -- common/autotest_common.sh@1477 -- # uname 00:09:54.697 23:51:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:09:54.697 23:51:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:54.697 23:51:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:54.697 lcov: LCOV version 1.15 00:09:54.697 23:51:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvme-phy-autotest/spdk -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info 00:10:33.405 /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:33.405 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:11:20.079 23:52:51 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:11:20.079 23:52:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.079 23:52:51 -- common/autotest_common.sh@10 -- # set +x 00:11:20.079 23:52:51 -- spdk/autotest.sh@78 -- # rm -f 00:11:20.079 23:52:51 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:11:20.079 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:11:20.079 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:11:20.079 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:11:20.079 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:11:20.079 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:11:20.079 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:11:20.079 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:11:20.079 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:11:20.079 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:11:20.079 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:11:20.079 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:11:20.079 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:11:20.079 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:11:20.079 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:11:20.079 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:11:20.079 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:11:20.079 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:11:20.079 23:52:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:11:20.079 23:52:53 -- 
common/autotest_common.sh@1657 -- # zoned_devs=()
00:11:20.079 23:52:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:11:20.079 23:52:53 -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:11:20.079 23:52:53 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:11:20.079 23:52:53 -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:11:20.079 23:52:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:11:20.079 23:52:53 -- common/autotest_common.sh@1669 -- # bdf=0000:84:00.0
00:11:20.079 23:52:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:11:20.079 23:52:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:11:20.079 23:52:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:11:20.079 23:52:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:11:20.079 23:52:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:11:20.079 23:52:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:11:20.079 23:52:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:11:20.079 23:52:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:11:20.079 23:52:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:11:20.079 23:52:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:11:20.079 23:52:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:11:20.079 No valid GPT data, bailing
00:11:20.079 23:52:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:11:20.079 23:52:53 -- scripts/common.sh@394 -- # pt=
00:11:20.079 23:52:53 -- scripts/common.sh@395 -- # return 1
00:11:20.079 23:52:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:11:20.079 1+0 records in
00:11:20.079 1+0 records out
00:11:20.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00615575 s, 170 MB/s
00:11:20.079 23:52:53 -- spdk/autotest.sh@105 -- # sync
00:11:20.079 23:52:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:11:20.079 23:52:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:11:20.079 23:52:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:11:20.079 23:52:56 -- spdk/autotest.sh@111 -- # uname -s
00:11:20.079 23:52:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:11:20.079 23:52:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:11:20.079 23:52:56 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status
00:11:20.079 Hugepages
00:11:20.079 node hugesize free / total
00:11:20.079 node0 1048576kB 0 / 0
00:11:20.079 node0 2048kB 0 / 0
00:11:20.079 node1 1048576kB 0 / 0
00:11:20.079 node1 2048kB 0 / 0
00:11:20.079
00:11:20.079 Type BDF Vendor Device NUMA Driver Device Block devices
00:11:20.079 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:11:20.079 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:11:20.079 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:11:20.079 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:11:20.079 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:11:20.079 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:11:20.079 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:11:20.079 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:11:20.079 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:11:20.079 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:11:20.079 23:52:57 -- spdk/autotest.sh@117 -- # uname -s
00:11:20.079 23:52:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:11:20.079 23:52:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:11:20.079 23:52:57 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:11:20.079 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:11:20.337 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:11:20.337 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:11:20.337 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:11:20.337 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:11:20.337 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:11:20.337 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:11:20.337 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:11:20.337 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:11:21.272 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:11:21.272 23:52:59 -- common/autotest_common.sh@1517 -- # sleep 1
00:11:22.205 23:53:00 -- common/autotest_common.sh@1518 -- # bdfs=()
00:11:22.205 23:53:00 -- common/autotest_common.sh@1518 -- # local bdfs
00:11:22.205 23:53:00 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:11:22.205 23:53:00 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:11:22.205 23:53:00 -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:22.205 23:53:00 -- common/autotest_common.sh@1498 -- # local bdfs
00:11:22.205 23:53:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:22.205 23:53:00 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:11:22.205 23:53:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:22.205 23:53:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:11:22.205 23:53:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:84:00.0
00:11:22.205 23:53:00 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:11:23.579 Waiting for block devices as requested
00:11:23.579 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:11:23.837 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:11:23.837 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:11:24.096 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:11:24.096 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:11:24.096 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:11:24.353 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:11:24.353 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:11:24.353 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:11:24.353 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:11:24.611 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:11:24.611 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:11:24.611 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:11:24.611 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:11:24.869 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:11:24.869 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:11:24.869 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:11:24.869 23:53:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:11:24.869 23:53:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1487 -- # grep 0000:84:00.0/nvme/nvme
00:11:25.128 23:53:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]]
00:11:25.128 23:53:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:11:25.128 23:53:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1531 -- # grep oacs
00:11:25.128 23:53:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:11:25.128 23:53:03 -- common/autotest_common.sh@1531 -- # oacs=' 0xe'
00:11:25.128 23:53:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:11:25.128 23:53:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:11:25.128 23:53:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:11:25.128 23:53:03 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:11:25.128 23:53:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:11:25.128 23:53:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:11:25.128 23:53:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:11:25.128 23:53:03 -- common/autotest_common.sh@1543 -- # continue
00:11:25.128 23:53:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:11:25.128 23:53:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:25.128 23:53:03 -- common/autotest_common.sh@10 -- # set +x
00:11:25.128 23:53:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:11:25.128 23:53:03 -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:25.128 23:53:03 -- common/autotest_common.sh@10 -- # set +x
00:11:25.128 23:53:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:11:26.502 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:11:26.502 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:11:26.502 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:11:26.502 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:11:26.502 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:11:26.502 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:11:26.502 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:11:26.502 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:11:26.502 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:11:26.502 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:11:26.502 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:11:26.502 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:11:26.502 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:11:26.502 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
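
The id-ctrl probe traced just above gates namespace cleanup on controller capabilities: OACS comes back as 0xe and the script keeps bit 3 (0x8, Namespace Management per the NVMe spec), then confirms unvmcap is zero before continuing. The same check as a standalone sketch; /dev/nvme0 stands in for whichever controller the loop resolved:

    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # ' 0xe' in this run
    if (( oacs & 0x8 )); then       # bit 3: Namespace Management supported
      unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "no unallocated capacity; skipping revert"
    fi
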
00:11:26.502 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:11:26.502 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:11:27.436 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:11:27.436 23:53:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:11:27.436 23:53:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.436 23:53:05 -- common/autotest_common.sh@10 -- # set +x 00:11:27.436 23:53:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:11:27.436 23:53:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:11:27.436 23:53:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:11:27.436 23:53:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:11:27.436 23:53:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:11:27.436 23:53:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:11:27.436 23:53:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:11:27.436 23:53:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:11:27.436 23:53:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:27.436 23:53:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:27.436 23:53:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:27.436 23:53:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:11:27.436 23:53:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:27.436 23:53:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:27.436 23:53:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:84:00.0 00:11:27.436 23:53:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:27.436 23:53:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:11:27.436 23:53:05 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:11:27.436 23:53:05 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:11:27.436 23:53:05 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:11:27.436 23:53:05 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:11:27.436 23:53:05 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:84:00.0 00:11:27.695 23:53:05 -- common/autotest_common.sh@1579 -- # [[ -z 0000:84:00.0 ]] 00:11:27.695 23:53:05 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=463465 00:11:27.695 23:53:05 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:27.695 23:53:05 -- common/autotest_common.sh@1585 -- # waitforlisten 463465 00:11:27.695 23:53:05 -- common/autotest_common.sh@835 -- # '[' -z 463465 ']' 00:11:27.695 23:53:05 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.695 23:53:05 -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.695 23:53:05 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.695 23:53:05 -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.695 23:53:05 -- common/autotest_common.sh@10 -- # set +x 00:11:27.695 [2024-12-09 23:53:06.033015] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
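
opal_revert_cleanup above builds its controller list by asking gen_nvme.sh for transport addresses and keeping only PCI functions whose sysfs device ID is 0x0a54. The same filter as a standalone sketch, run from an SPDK checkout (the relative script path is an assumption):

    # Enumerate NVMe BDFs known to SPDK, then match on PCI device ID via sysfs.
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done
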
00:11:27.695 [2024-12-09 23:53:06.033127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463465 ]
00:11:27.695 [2024-12-09 23:53:06.112878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:27.695 [2024-12-09 23:53:06.172573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:28.261 [2024-12-09 23:53:06.501074] 'OCF_Core' volume operations registered
00:11:28.261 [2024-12-09 23:53:06.501171] 'OCF_Cache' volume operations registered
00:11:28.261 [2024-12-09 23:53:06.508821] 'OCF Composite' volume operations registered
00:11:28.261 [2024-12-09 23:53:06.515110] 'SPDK_block_device' volume operations registered
00:11:28.261 23:53:06 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:28.261 23:53:06 -- common/autotest_common.sh@868 -- # return 0
00:11:28.261 23:53:06 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:11:28.261 23:53:06 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:11:28.261 23:53:06 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0
00:11:31.544 nvme0n1
00:11:31.544 23:53:09 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:11:32.111 [2024-12-09 23:53:10.490980] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:11:32.111 request:
00:11:32.111 {
00:11:32.111 "nvme_ctrlr_name": "nvme0",
00:11:32.111 "password": "test",
00:11:32.111 "method": "bdev_nvme_opal_revert",
00:11:32.111 "req_id": 1
00:11:32.111 }
00:11:32.111 Got JSON-RPC error response
00:11:32.111 response:
00:11:32.111 {
00:11:32.111 "code": -32602,
00:11:32.111 "message": "Invalid parameters"
00:11:32.111 }
00:11:32.111 23:53:10 -- common/autotest_common.sh@1591 -- # true
00:11:32.111 23:53:10 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:11:32.111 23:53:10 -- common/autotest_common.sh@1595 -- # killprocess 463465
00:11:32.111 23:53:10 -- common/autotest_common.sh@954 -- # '[' -z 463465 ']'
00:11:32.111 23:53:10 -- common/autotest_common.sh@958 -- # kill -0 463465
00:11:32.111 23:53:10 -- common/autotest_common.sh@959 -- # uname
00:11:32.111 23:53:10 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:32.111 23:53:10 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463465
00:11:32.111 23:53:10 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:32.111 23:53:10 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:32.111 23:53:10 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463465'
00:11:32.111 killing process with pid 463465
00:11:32.111 23:53:10 -- common/autotest_common.sh@973 -- # kill 463465
00:11:32.111 23:53:10 -- common/autotest_common.sh@978 -- # wait 463465
00:11:34.640 23:53:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:11:34.640 23:53:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:11:34.640 23:53:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:11:34.640 23:53:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:11:34.640 23:53:12 -- spdk/autotest.sh@149 -- # timing_enter lib
00:11:34.640 23:53:12 -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:34.640 23:53:12 -- common/autotest_common.sh@10 -- # set +x
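
The failed revert above is a plain JSON-RPC exchange with the freshly started spdk_tgt listening on /var/tmp/spdk.sock, and can be reproduced by hand from an SPDK checkout with the same two rpc.py calls the trace shows:

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0
    # On a drive without Opal support this returns the same -32602 "Invalid
    # parameters" error logged above; the trace's trailing 'true' suggests the
    # harness deliberately tolerates that failure.
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
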
00:11:34.640 23:53:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:11:34.640 23:53:12 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh 00:11:34.640 23:53:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.640 23:53:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.640 23:53:12 -- common/autotest_common.sh@10 -- # set +x 00:11:34.640 ************************************ 00:11:34.640 START TEST env 00:11:34.640 ************************************ 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh 00:11:34.640 * Looking for test storage... 00:11:34.640 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.640 23:53:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.640 23:53:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.640 23:53:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.640 23:53:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.640 23:53:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.640 23:53:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.640 23:53:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.640 23:53:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.640 23:53:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.640 23:53:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.640 23:53:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.640 23:53:12 env -- scripts/common.sh@344 -- # case "$op" in 00:11:34.640 23:53:12 env -- scripts/common.sh@345 -- # : 1 00:11:34.640 23:53:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.640 23:53:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.640 23:53:12 env -- scripts/common.sh@365 -- # decimal 1 00:11:34.640 23:53:12 env -- scripts/common.sh@353 -- # local d=1 00:11:34.640 23:53:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.640 23:53:12 env -- scripts/common.sh@355 -- # echo 1 00:11:34.640 23:53:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.640 23:53:12 env -- scripts/common.sh@366 -- # decimal 2 00:11:34.640 23:53:12 env -- scripts/common.sh@353 -- # local d=2 00:11:34.640 23:53:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.640 23:53:12 env -- scripts/common.sh@355 -- # echo 2 00:11:34.640 23:53:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.640 23:53:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.640 23:53:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.640 23:53:12 env -- scripts/common.sh@368 -- # return 0 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.640 --rc genhtml_branch_coverage=1 00:11:34.640 --rc genhtml_function_coverage=1 00:11:34.640 --rc genhtml_legend=1 00:11:34.640 --rc geninfo_all_blocks=1 00:11:34.640 --rc geninfo_unexecuted_blocks=1 00:11:34.640 00:11:34.640 ' 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.640 --rc genhtml_branch_coverage=1 00:11:34.640 --rc genhtml_function_coverage=1 00:11:34.640 --rc genhtml_legend=1 00:11:34.640 --rc geninfo_all_blocks=1 00:11:34.640 --rc geninfo_unexecuted_blocks=1 00:11:34.640 00:11:34.640 ' 00:11:34.640 23:53:12 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.640 --rc genhtml_branch_coverage=1 00:11:34.640 --rc genhtml_function_coverage=1 00:11:34.641 --rc genhtml_legend=1 00:11:34.641 --rc geninfo_all_blocks=1 00:11:34.641 --rc geninfo_unexecuted_blocks=1 00:11:34.641 00:11:34.641 ' 00:11:34.641 23:53:12 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.641 --rc genhtml_branch_coverage=1 00:11:34.641 --rc genhtml_function_coverage=1 00:11:34.641 --rc genhtml_legend=1 00:11:34.641 --rc geninfo_all_blocks=1 00:11:34.641 --rc geninfo_unexecuted_blocks=1 00:11:34.641 00:11:34.641 ' 00:11:34.641 23:53:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut 00:11:34.641 23:53:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.641 23:53:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.641 23:53:12 env -- common/autotest_common.sh@10 -- # set +x 00:11:34.641 ************************************ 00:11:34.641 START TEST env_memory 00:11:34.641 ************************************ 00:11:34.641 23:53:12 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut 00:11:34.641 00:11:34.641 00:11:34.641 CUnit - A unit testing framework for C - Version 2.1-3 00:11:34.641 http://cunit.sourceforge.net/ 00:11:34.641 00:11:34.641 00:11:34.641 Suite: memory 00:11:34.641 Test: alloc and free memory map ...[2024-12-09 23:53:12.893636] 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:34.641 passed 00:11:34.641 Test: mem map translation ...[2024-12-09 23:53:12.914267] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:34.641 [2024-12-09 23:53:12.914295] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:34.641 [2024-12-09 23:53:12.914337] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:34.641 [2024-12-09 23:53:12.914349] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:34.641 passed 00:11:34.641 Test: mem map registration ...[2024-12-09 23:53:12.957710] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:11:34.641 [2024-12-09 23:53:12.957730] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:11:34.641 passed 00:11:34.641 Test: mem map adjacent registrations ...passed 00:11:34.641 00:11:34.641 Run Summary: Type Total Ran Passed Failed Inactive 00:11:34.641 suites 1 1 n/a 0 0 00:11:34.641 tests 4 4 4 0 0 00:11:34.641 asserts 152 152 152 0 n/a 00:11:34.641 00:11:34.641 Elapsed time = 0.227 seconds 00:11:34.641 00:11:34.641 real 0m0.236s 00:11:34.641 user 0m0.222s 00:11:34.641 sys 0m0.013s 00:11:34.641 23:53:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.641 23:53:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:34.641 ************************************ 00:11:34.641 END TEST env_memory 00:11:34.641 ************************************ 00:11:34.641 23:53:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys 00:11:34.641 23:53:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.641 23:53:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.641 23:53:13 env -- common/autotest_common.sh@10 -- # set +x 00:11:34.900 ************************************ 00:11:34.900 START TEST env_vtophys 00:11:34.900 ************************************ 00:11:34.900 23:53:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys 00:11:34.900 EAL: lib.eal log level changed from notice to debug 00:11:34.900 EAL: Detected lcore 0 as core 0 on socket 0 00:11:34.900 EAL: Detected lcore 1 as core 1 on socket 0 00:11:34.900 EAL: Detected lcore 2 as core 2 on socket 0 00:11:34.900 EAL: Detected lcore 3 as core 3 on socket 0 00:11:34.900 EAL: Detected lcore 4 as core 4 on socket 0 00:11:34.900 EAL: Detected lcore 5 as core 5 on socket 0 00:11:34.900 EAL: Detected lcore 6 as core 8 on socket 0 00:11:34.900 EAL: Detected lcore 7 as core 9 on socket 0 00:11:34.900 EAL: Detected lcore 8 as core 10 on socket 0 00:11:34.900 EAL: Detected lcore 9 as core 11 on socket 0 00:11:34.900 EAL: Detected lcore 10 as core 12 on socket 0 00:11:34.900 
EAL: Detected lcore 11 as core 13 on socket 0 00:11:34.900 EAL: Detected lcore 12 as core 0 on socket 1 00:11:34.900 EAL: Detected lcore 13 as core 1 on socket 1 00:11:34.900 EAL: Detected lcore 14 as core 2 on socket 1 00:11:34.900 EAL: Detected lcore 15 as core 3 on socket 1 00:11:34.900 EAL: Detected lcore 16 as core 4 on socket 1 00:11:34.900 EAL: Detected lcore 17 as core 5 on socket 1 00:11:34.900 EAL: Detected lcore 18 as core 8 on socket 1 00:11:34.900 EAL: Detected lcore 19 as core 9 on socket 1 00:11:34.900 EAL: Detected lcore 20 as core 10 on socket 1 00:11:34.900 EAL: Detected lcore 21 as core 11 on socket 1 00:11:34.900 EAL: Detected lcore 22 as core 12 on socket 1 00:11:34.900 EAL: Detected lcore 23 as core 13 on socket 1 00:11:34.900 EAL: Detected lcore 24 as core 0 on socket 0 00:11:34.900 EAL: Detected lcore 25 as core 1 on socket 0 00:11:34.900 EAL: Detected lcore 26 as core 2 on socket 0 00:11:34.900 EAL: Detected lcore 27 as core 3 on socket 0 00:11:34.900 EAL: Detected lcore 28 as core 4 on socket 0 00:11:34.900 EAL: Detected lcore 29 as core 5 on socket 0 00:11:34.900 EAL: Detected lcore 30 as core 8 on socket 0 00:11:34.900 EAL: Detected lcore 31 as core 9 on socket 0 00:11:34.900 EAL: Detected lcore 32 as core 10 on socket 0 00:11:34.900 EAL: Detected lcore 33 as core 11 on socket 0 00:11:34.900 EAL: Detected lcore 34 as core 12 on socket 0 00:11:34.900 EAL: Detected lcore 35 as core 13 on socket 0 00:11:34.900 EAL: Detected lcore 36 as core 0 on socket 1 00:11:34.900 EAL: Detected lcore 37 as core 1 on socket 1 00:11:34.900 EAL: Detected lcore 38 as core 2 on socket 1 00:11:34.900 EAL: Detected lcore 39 as core 3 on socket 1 00:11:34.900 EAL: Detected lcore 40 as core 4 on socket 1 00:11:34.900 EAL: Detected lcore 41 as core 5 on socket 1 00:11:34.900 EAL: Detected lcore 42 as core 8 on socket 1 00:11:34.900 EAL: Detected lcore 43 as core 9 on socket 1 00:11:34.900 EAL: Detected lcore 44 as core 10 on socket 1 00:11:34.900 EAL: Detected lcore 45 as core 11 on socket 1 00:11:34.900 EAL: Detected lcore 46 as core 12 on socket 1 00:11:34.900 EAL: Detected lcore 47 as core 13 on socket 1 00:11:34.900 EAL: Maximum logical cores by configuration: 128 00:11:34.900 EAL: Detected CPU lcores: 48 00:11:34.900 EAL: Detected NUMA nodes: 2 00:11:34.900 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:11:34.900 EAL: Detected shared linkage of DPDK 00:11:34.900 EAL: No shared files mode enabled, IPC will be disabled 00:11:34.900 EAL: Bus pci wants IOVA as 'DC' 00:11:34.900 EAL: Buses did not request a specific IOVA mode. 00:11:34.900 EAL: IOMMU is available, selecting IOVA as VA mode. 00:11:34.900 EAL: Selected IOVA mode 'VA' 00:11:34.900 EAL: Probing VFIO support... 00:11:34.900 EAL: IOMMU type 1 (Type 1) is supported 00:11:34.900 EAL: IOMMU type 7 (sPAPR) is not supported 00:11:34.900 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:11:34.900 EAL: VFIO support initialized 00:11:34.900 EAL: Ask a virtual area of 0x2e000 bytes 00:11:34.900 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:34.900 EAL: Setting up physically contiguous memory... 
00:11:34.900 EAL: Setting maximum number of open files to 524288 00:11:34.900 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:34.900 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:11:34.900 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:34.900 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:11:34.900 EAL: Ask a virtual area of 0x61000 bytes 00:11:34.900 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:11:34.900 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:34.900 EAL: Ask a virtual area of 0x400000000 bytes 00:11:34.900 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:11:34.900 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:11:34.901 EAL: Hugepages will be freed exactly as allocated. 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: TSC frequency is ~2700000 KHz 00:11:34.901 EAL: Main lcore 0 is ready (tid=7f3f75c24a00;cpuset=[0]) 00:11:34.901 EAL: Trying to obtain current memory policy. 00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:34.901 EAL: Restoring previous memory policy: 0 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was expanded by 2MB 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:34.901 EAL: Mem event callback 'spdk:(nil)' registered 00:11:34.901 00:11:34.901 00:11:34.901 CUnit - A unit testing framework for C - Version 2.1-3 00:11:34.901 http://cunit.sourceforge.net/ 00:11:34.901 00:11:34.901 00:11:34.901 Suite: components_suite 00:11:34.901 Test: vtophys_malloc_test ...passed 00:11:34.901 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:34.901 EAL: Restoring previous memory policy: 4 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was expanded by 4MB 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was shrunk by 4MB 00:11:34.901 EAL: Trying to obtain current memory policy. 00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:34.901 EAL: Restoring previous memory policy: 4 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was expanded by 6MB 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was shrunk by 6MB 00:11:34.901 EAL: Trying to obtain current memory policy. 00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:34.901 EAL: Restoring previous memory policy: 4 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was expanded by 10MB 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was shrunk by 10MB 00:11:34.901 EAL: Trying to obtain current memory policy. 
00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:34.901 EAL: Restoring previous memory policy: 4 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was expanded by 18MB 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was shrunk by 18MB 00:11:34.901 EAL: Trying to obtain current memory policy. 00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:34.901 EAL: Restoring previous memory policy: 4 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was expanded by 34MB 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was shrunk by 34MB 00:11:34.901 EAL: Trying to obtain current memory policy. 00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:34.901 EAL: Restoring previous memory policy: 4 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was expanded by 66MB 00:11:34.901 EAL: Calling mem event callback 'spdk:(nil)' 00:11:34.901 EAL: request: mp_malloc_sync 00:11:34.901 EAL: No shared files mode enabled, IPC is disabled 00:11:34.901 EAL: Heap on socket 0 was shrunk by 66MB 00:11:34.901 EAL: Trying to obtain current memory policy. 00:11:34.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:35.159 EAL: Restoring previous memory policy: 4 00:11:35.159 EAL: Calling mem event callback 'spdk:(nil)' 00:11:35.159 EAL: request: mp_malloc_sync 00:11:35.159 EAL: No shared files mode enabled, IPC is disabled 00:11:35.159 EAL: Heap on socket 0 was expanded by 130MB 00:11:35.159 EAL: Calling mem event callback 'spdk:(nil)' 00:11:35.159 EAL: request: mp_malloc_sync 00:11:35.159 EAL: No shared files mode enabled, IPC is disabled 00:11:35.159 EAL: Heap on socket 0 was shrunk by 130MB 00:11:35.159 EAL: Trying to obtain current memory policy. 00:11:35.159 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:35.159 EAL: Restoring previous memory policy: 4 00:11:35.159 EAL: Calling mem event callback 'spdk:(nil)' 00:11:35.159 EAL: request: mp_malloc_sync 00:11:35.159 EAL: No shared files mode enabled, IPC is disabled 00:11:35.159 EAL: Heap on socket 0 was expanded by 258MB 00:11:35.159 EAL: Calling mem event callback 'spdk:(nil)' 00:11:35.416 EAL: request: mp_malloc_sync 00:11:35.416 EAL: No shared files mode enabled, IPC is disabled 00:11:35.416 EAL: Heap on socket 0 was shrunk by 258MB 00:11:35.416 EAL: Trying to obtain current memory policy. 
00:11:35.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:35.416 EAL: Restoring previous memory policy: 4 00:11:35.416 EAL: Calling mem event callback 'spdk:(nil)' 00:11:35.416 EAL: request: mp_malloc_sync 00:11:35.416 EAL: No shared files mode enabled, IPC is disabled 00:11:35.416 EAL: Heap on socket 0 was expanded by 514MB 00:11:35.674 EAL: Calling mem event callback 'spdk:(nil)' 00:11:35.674 EAL: request: mp_malloc_sync 00:11:35.674 EAL: No shared files mode enabled, IPC is disabled 00:11:35.674 EAL: Heap on socket 0 was shrunk by 514MB 00:11:35.674 EAL: Trying to obtain current memory policy. 00:11:35.674 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.241 EAL: Restoring previous memory policy: 4 00:11:36.241 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.241 EAL: request: mp_malloc_sync 00:11:36.241 EAL: No shared files mode enabled, IPC is disabled 00:11:36.241 EAL: Heap on socket 0 was expanded by 1026MB 00:11:36.499 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.499 EAL: request: mp_malloc_sync 00:11:36.499 EAL: No shared files mode enabled, IPC is disabled 00:11:36.499 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:36.499 passed 00:11:36.499 00:11:36.499 Run Summary: Type Total Ran Passed Failed Inactive 00:11:36.499 suites 1 1 n/a 0 0 00:11:36.499 tests 2 2 2 0 0 00:11:36.499 asserts 497 497 497 0 n/a 00:11:36.499 00:11:36.499 Elapsed time = 1.648 seconds 00:11:36.499 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.499 EAL: request: mp_malloc_sync 00:11:36.499 EAL: No shared files mode enabled, IPC is disabled 00:11:36.499 EAL: Heap on socket 0 was shrunk by 2MB 00:11:36.499 EAL: No shared files mode enabled, IPC is disabled 00:11:36.499 EAL: No shared files mode enabled, IPC is disabled 00:11:36.499 EAL: No shared files mode enabled, IPC is disabled 00:11:36.499 00:11:36.499 real 0m1.846s 00:11:36.499 user 0m0.905s 00:11:36.499 sys 0m0.887s 00:11:36.499 23:53:15 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.499 23:53:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:36.499 ************************************ 00:11:36.499 END TEST env_vtophys 00:11:36.499 ************************************ 00:11:36.758 23:53:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut 00:11:36.758 23:53:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:36.758 23:53:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.758 23:53:15 env -- common/autotest_common.sh@10 -- # set +x 00:11:36.758 ************************************ 00:11:36.758 START TEST env_pci 00:11:36.758 ************************************ 00:11:36.758 23:53:15 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut 00:11:36.758 00:11:36.758 00:11:36.758 CUnit - A unit testing framework for C - Version 2.1-3 00:11:36.758 http://cunit.sourceforge.net/ 00:11:36.758 00:11:36.758 00:11:36.758 Suite: pci 00:11:36.758 Test: pci_hook ...[2024-12-09 23:53:15.078202] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 464606 has claimed it 00:11:36.758 EAL: Cannot find device (10000:00:01.0) 00:11:36.758 EAL: Failed to attach device on primary process 00:11:36.758 passed 00:11:36.758 00:11:36.758 Run Summary: Type Total Ran Passed Failed Inactive 00:11:36.758 suites 1 1 
n/a 0 0 00:11:36.758 tests 1 1 1 0 0 00:11:36.758 asserts 25 25 25 0 n/a 00:11:36.758 00:11:36.758 Elapsed time = 0.023 seconds 00:11:36.758 00:11:36.758 real 0m0.038s 00:11:36.758 user 0m0.012s 00:11:36.758 sys 0m0.025s 00:11:36.758 23:53:15 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.758 23:53:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:36.758 ************************************ 00:11:36.758 END TEST env_pci 00:11:36.758 ************************************ 00:11:36.758 23:53:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:36.758 23:53:15 env -- env/env.sh@15 -- # uname 00:11:36.758 23:53:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:36.758 23:53:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:36.758 23:53:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:36.758 23:53:15 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:36.758 23:53:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.758 23:53:15 env -- common/autotest_common.sh@10 -- # set +x 00:11:36.758 ************************************ 00:11:36.758 START TEST env_dpdk_post_init 00:11:36.758 ************************************ 00:11:36.758 23:53:15 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:36.758 EAL: Detected CPU lcores: 48 00:11:36.758 EAL: Detected NUMA nodes: 2 00:11:36.758 EAL: Detected shared linkage of DPDK 00:11:36.758 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:36.758 EAL: Selected IOVA mode 'VA' 00:11:36.758 EAL: VFIO support initialized 00:11:36.758 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:37.021 EAL: Using IOMMU type 1 (Type 1) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:11:37.021 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:11:37.279 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:11:37.846 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:11:41.126 EAL: Releasing PCI 
mapped resource for 0000:84:00.0 00:11:41.126 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:11:41.126 Starting DPDK initialization... 00:11:41.126 Starting SPDK post initialization... 00:11:41.126 SPDK NVMe probe 00:11:41.126 Attaching to 0000:84:00.0 00:11:41.126 Attached to 0000:84:00.0 00:11:41.126 Cleaning up... 00:11:41.126 00:11:41.126 real 0m4.445s 00:11:41.126 user 0m3.033s 00:11:41.126 sys 0m0.470s 00:11:41.126 23:53:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.126 23:53:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:41.126 ************************************ 00:11:41.126 END TEST env_dpdk_post_init 00:11:41.126 ************************************ 00:11:41.126 23:53:19 env -- env/env.sh@26 -- # uname 00:11:41.126 23:53:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:41.126 23:53:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:11:41.126 23:53:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.126 23:53:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.126 23:53:19 env -- common/autotest_common.sh@10 -- # set +x 00:11:41.384 ************************************ 00:11:41.384 START TEST env_mem_callbacks 00:11:41.384 ************************************ 00:11:41.384 23:53:19 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:11:41.384 EAL: Detected CPU lcores: 48 00:11:41.384 EAL: Detected NUMA nodes: 2 00:11:41.384 EAL: Detected shared linkage of DPDK 00:11:41.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:41.384 EAL: Selected IOVA mode 'VA' 00:11:41.384 EAL: VFIO support initialized 00:11:41.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:41.384 00:11:41.384 00:11:41.384 CUnit - A unit testing framework for C - Version 2.1-3 00:11:41.384 http://cunit.sourceforge.net/ 00:11:41.384 00:11:41.384 00:11:41.384 Suite: memory 00:11:41.384 Test: test ... 
00:11:41.384 register 0x200000200000 2097152 00:11:41.384 malloc 3145728 00:11:41.384 register 0x200000400000 4194304 00:11:41.384 buf 0x200000500000 len 3145728 PASSED 00:11:41.384 malloc 64 00:11:41.384 buf 0x2000004fff40 len 64 PASSED 00:11:41.384 malloc 4194304 00:11:41.384 register 0x200000800000 6291456 00:11:41.384 buf 0x200000a00000 len 4194304 PASSED 00:11:41.384 free 0x200000500000 3145728 00:11:41.384 free 0x2000004fff40 64 00:11:41.384 unregister 0x200000400000 4194304 PASSED 00:11:41.384 free 0x200000a00000 4194304 00:11:41.384 unregister 0x200000800000 6291456 PASSED 00:11:41.384 malloc 8388608 00:11:41.384 register 0x200000400000 10485760 00:11:41.384 buf 0x200000600000 len 8388608 PASSED 00:11:41.384 free 0x200000600000 8388608 00:11:41.384 unregister 0x200000400000 10485760 PASSED 00:11:41.384 passed 00:11:41.384 00:11:41.384 Run Summary: Type Total Ran Passed Failed Inactive 00:11:41.384 suites 1 1 n/a 0 0 00:11:41.384 tests 1 1 1 0 0 00:11:41.384 asserts 15 15 15 0 n/a 00:11:41.384 00:11:41.384 Elapsed time = 0.007 seconds 00:11:41.384 00:11:41.384 real 0m0.081s 00:11:41.384 user 0m0.021s 00:11:41.384 sys 0m0.060s 00:11:41.384 23:53:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.384 23:53:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:41.384 ************************************ 00:11:41.384 END TEST env_mem_callbacks 00:11:41.384 ************************************ 00:11:41.384 00:11:41.384 real 0m7.092s 00:11:41.384 user 0m4.404s 00:11:41.384 sys 0m1.709s 00:11:41.384 23:53:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.384 23:53:19 env -- common/autotest_common.sh@10 -- # set +x 00:11:41.384 ************************************ 00:11:41.384 END TEST env 00:11:41.384 ************************************ 00:11:41.384 23:53:19 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh 00:11:41.384 23:53:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.384 23:53:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.384 23:53:19 -- common/autotest_common.sh@10 -- # set +x 00:11:41.384 ************************************ 00:11:41.384 START TEST rpc 00:11:41.384 ************************************ 00:11:41.385 23:53:19 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh 00:11:41.385 * Looking for test storage... 
00:11:41.385 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:11:41.385 23:53:19 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:41.385 23:53:19 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:41.385 23:53:19 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.644 23:53:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.644 23:53:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.644 23:53:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.644 23:53:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.644 23:53:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.644 23:53:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.644 23:53:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.644 23:53:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:41.644 23:53:20 rpc -- scripts/common.sh@345 -- # : 1 00:11:41.644 23:53:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.644 23:53:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.644 23:53:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:11:41.644 23:53:20 rpc -- scripts/common.sh@353 -- # local d=1 00:11:41.644 23:53:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.644 23:53:20 rpc -- scripts/common.sh@355 -- # echo 1 00:11:41.644 23:53:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.644 23:53:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@353 -- # local d=2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.644 23:53:20 rpc -- scripts/common.sh@355 -- # echo 2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.644 23:53:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.644 23:53:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.644 23:53:20 rpc -- scripts/common.sh@368 -- # return 0 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.644 --rc genhtml_branch_coverage=1 00:11:41.644 --rc genhtml_function_coverage=1 00:11:41.644 --rc genhtml_legend=1 00:11:41.644 --rc geninfo_all_blocks=1 00:11:41.644 --rc geninfo_unexecuted_blocks=1 00:11:41.644 00:11:41.644 ' 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.644 --rc genhtml_branch_coverage=1 00:11:41.644 --rc genhtml_function_coverage=1 00:11:41.644 --rc genhtml_legend=1 00:11:41.644 --rc geninfo_all_blocks=1 00:11:41.644 --rc geninfo_unexecuted_blocks=1 00:11:41.644 00:11:41.644 ' 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.644 --rc genhtml_branch_coverage=1 00:11:41.644 --rc genhtml_function_coverage=1 00:11:41.644 
--rc genhtml_legend=1 00:11:41.644 --rc geninfo_all_blocks=1 00:11:41.644 --rc geninfo_unexecuted_blocks=1 00:11:41.644 00:11:41.644 ' 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:41.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.644 --rc genhtml_branch_coverage=1 00:11:41.644 --rc genhtml_function_coverage=1 00:11:41.644 --rc genhtml_legend=1 00:11:41.644 --rc geninfo_all_blocks=1 00:11:41.644 --rc geninfo_unexecuted_blocks=1 00:11:41.644 00:11:41.644 ' 00:11:41.644 23:53:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=465258 00:11:41.644 23:53:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:11:41.644 23:53:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:41.644 23:53:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 465258 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 465258 ']' 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.644 23:53:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.644 [2024-12-09 23:53:20.108071] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:11:41.644 [2024-12-09 23:53:20.108159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465258 ] 00:11:41.903 [2024-12-09 23:53:20.204241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.903 [2024-12-09 23:53:20.284232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:41.903 [2024-12-09 23:53:20.284305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 465258' to capture a snapshot of events at runtime. 00:11:41.903 [2024-12-09 23:53:20.284319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.903 [2024-12-09 23:53:20.284330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.903 [2024-12-09 23:53:20.284340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid465258 for offline analysis/debug. 
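spdk_tgt was launched with "-e bdev", so only the bdev tracepoint group (mask 0x8) is enabled, and the notices above spell out the two ways to consume the resulting shm file. The rpc_trace_cmd_test further below verifies exactly this state through the trace_get_info RPC; a rough sketch of that query, again assuming the default /var/tmp/spdk.sock socket:

    from spdk.rpc.client import JSONRPCClient

    client = JSONRPCClient("/var/tmp/spdk.sock")
    info = client.call("trace_get_info")
    print(info["tpoint_shm_path"])      # /dev/shm/spdk_tgt_trace.pid465258, as noted above
    print(info["tpoint_group_mask"])    # "0x8": the bdev group selected by "-e bdev"
    print(info["bdev"]["tpoint_mask"])  # "0xffffffffffffffff": every tpoint in the group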
00:11:41.903 [2024-12-09 23:53:20.284904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.161 [2024-12-09 23:53:20.651750] 'OCF_Core' volume operations registered 00:11:42.161 [2024-12-09 23:53:20.651861] 'OCF_Cache' volume operations registered 00:11:42.161 [2024-12-09 23:53:20.661088] 'OCF Composite' volume operations registered 00:11:42.161 [2024-12-09 23:53:20.668793] 'SPDK_block_device' volume operations registered 00:11:42.420 23:53:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.420 23:53:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:11:42.420 23:53:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:11:42.420 23:53:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:11:42.420 23:53:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:42.420 23:53:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:42.420 23:53:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.420 23:53:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.420 23:53:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.420 ************************************ 00:11:42.420 START TEST rpc_integrity 00:11:42.420 ************************************ 00:11:42.420 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:42.420 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.420 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.420 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.420 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.420 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:42.420 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:42.678 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:42.678 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:42.678 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.678 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.678 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:42.679 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:42.679 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.679 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 23:53:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:42.679 { 00:11:42.679 "name": "Malloc0", 00:11:42.679 "aliases": [ 00:11:42.679 "91ac4827-7301-4c8d-b491-2f315ab8302f" 00:11:42.679 ], 00:11:42.679 "product_name": "Malloc disk", 00:11:42.679 
"block_size": 512, 00:11:42.679 "num_blocks": 16384, 00:11:42.679 "uuid": "91ac4827-7301-4c8d-b491-2f315ab8302f", 00:11:42.679 "assigned_rate_limits": { 00:11:42.679 "rw_ios_per_sec": 0, 00:11:42.679 "rw_mbytes_per_sec": 0, 00:11:42.679 "r_mbytes_per_sec": 0, 00:11:42.679 "w_mbytes_per_sec": 0 00:11:42.679 }, 00:11:42.679 "claimed": false, 00:11:42.679 "zoned": false, 00:11:42.679 "supported_io_types": { 00:11:42.679 "read": true, 00:11:42.679 "write": true, 00:11:42.679 "unmap": true, 00:11:42.679 "flush": true, 00:11:42.679 "reset": true, 00:11:42.679 "nvme_admin": false, 00:11:42.679 "nvme_io": false, 00:11:42.679 "nvme_io_md": false, 00:11:42.679 "write_zeroes": true, 00:11:42.679 "zcopy": true, 00:11:42.679 "get_zone_info": false, 00:11:42.679 "zone_management": false, 00:11:42.679 "zone_append": false, 00:11:42.679 "compare": false, 00:11:42.679 "compare_and_write": false, 00:11:42.679 "abort": true, 00:11:42.679 "seek_hole": false, 00:11:42.679 "seek_data": false, 00:11:42.679 "copy": true, 00:11:42.679 "nvme_iov_md": false 00:11:42.679 }, 00:11:42.679 "memory_domains": [ 00:11:42.679 { 00:11:42.679 "dma_device_id": "system", 00:11:42.679 "dma_device_type": 1 00:11:42.679 }, 00:11:42.679 { 00:11:42.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.679 "dma_device_type": 2 00:11:42.679 } 00:11:42.679 ], 00:11:42.679 "driver_specific": {} 00:11:42.679 } 00:11:42.679 ]' 00:11:42.679 23:53:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 [2024-12-09 23:53:21.031172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:42.679 [2024-12-09 23:53:21.031267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.679 [2024-12-09 23:53:21.031330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d46c60 00:11:42.679 [2024-12-09 23:53:21.031343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.679 [2024-12-09 23:53:21.033841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.679 [2024-12-09 23:53:21.033867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:42.679 Passthru0 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:42.679 { 00:11:42.679 "name": "Malloc0", 00:11:42.679 "aliases": [ 00:11:42.679 "91ac4827-7301-4c8d-b491-2f315ab8302f" 00:11:42.679 ], 00:11:42.679 "product_name": "Malloc disk", 00:11:42.679 "block_size": 512, 00:11:42.679 "num_blocks": 16384, 00:11:42.679 "uuid": "91ac4827-7301-4c8d-b491-2f315ab8302f", 00:11:42.679 "assigned_rate_limits": { 00:11:42.679 "rw_ios_per_sec": 0, 00:11:42.679 "rw_mbytes_per_sec": 0, 00:11:42.679 
"r_mbytes_per_sec": 0, 00:11:42.679 "w_mbytes_per_sec": 0 00:11:42.679 }, 00:11:42.679 "claimed": true, 00:11:42.679 "claim_type": "exclusive_write", 00:11:42.679 "zoned": false, 00:11:42.679 "supported_io_types": { 00:11:42.679 "read": true, 00:11:42.679 "write": true, 00:11:42.679 "unmap": true, 00:11:42.679 "flush": true, 00:11:42.679 "reset": true, 00:11:42.679 "nvme_admin": false, 00:11:42.679 "nvme_io": false, 00:11:42.679 "nvme_io_md": false, 00:11:42.679 "write_zeroes": true, 00:11:42.679 "zcopy": true, 00:11:42.679 "get_zone_info": false, 00:11:42.679 "zone_management": false, 00:11:42.679 "zone_append": false, 00:11:42.679 "compare": false, 00:11:42.679 "compare_and_write": false, 00:11:42.679 "abort": true, 00:11:42.679 "seek_hole": false, 00:11:42.679 "seek_data": false, 00:11:42.679 "copy": true, 00:11:42.679 "nvme_iov_md": false 00:11:42.679 }, 00:11:42.679 "memory_domains": [ 00:11:42.679 { 00:11:42.679 "dma_device_id": "system", 00:11:42.679 "dma_device_type": 1 00:11:42.679 }, 00:11:42.679 { 00:11:42.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.679 "dma_device_type": 2 00:11:42.679 } 00:11:42.679 ], 00:11:42.679 "driver_specific": {} 00:11:42.679 }, 00:11:42.679 { 00:11:42.679 "name": "Passthru0", 00:11:42.679 "aliases": [ 00:11:42.679 "2db416d1-2e10-5172-b07f-22f04aa3ba40" 00:11:42.679 ], 00:11:42.679 "product_name": "passthru", 00:11:42.679 "block_size": 512, 00:11:42.679 "num_blocks": 16384, 00:11:42.679 "uuid": "2db416d1-2e10-5172-b07f-22f04aa3ba40", 00:11:42.679 "assigned_rate_limits": { 00:11:42.679 "rw_ios_per_sec": 0, 00:11:42.679 "rw_mbytes_per_sec": 0, 00:11:42.679 "r_mbytes_per_sec": 0, 00:11:42.679 "w_mbytes_per_sec": 0 00:11:42.679 }, 00:11:42.679 "claimed": false, 00:11:42.679 "zoned": false, 00:11:42.679 "supported_io_types": { 00:11:42.679 "read": true, 00:11:42.679 "write": true, 00:11:42.679 "unmap": true, 00:11:42.679 "flush": true, 00:11:42.679 "reset": true, 00:11:42.679 "nvme_admin": false, 00:11:42.679 "nvme_io": false, 00:11:42.679 "nvme_io_md": false, 00:11:42.679 "write_zeroes": true, 00:11:42.679 "zcopy": true, 00:11:42.679 "get_zone_info": false, 00:11:42.679 "zone_management": false, 00:11:42.679 "zone_append": false, 00:11:42.679 "compare": false, 00:11:42.679 "compare_and_write": false, 00:11:42.679 "abort": true, 00:11:42.679 "seek_hole": false, 00:11:42.679 "seek_data": false, 00:11:42.679 "copy": true, 00:11:42.679 "nvme_iov_md": false 00:11:42.679 }, 00:11:42.679 "memory_domains": [ 00:11:42.679 { 00:11:42.679 "dma_device_id": "system", 00:11:42.679 "dma_device_type": 1 00:11:42.679 }, 00:11:42.679 { 00:11:42.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.679 "dma_device_type": 2 00:11:42.679 } 00:11:42.679 ], 00:11:42.679 "driver_specific": { 00:11:42.679 "passthru": { 00:11:42.679 "name": "Passthru0", 00:11:42.679 "base_bdev_name": "Malloc0" 00:11:42.679 } 00:11:42.679 } 00:11:42.679 } 00:11:42.679 ]' 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:42.679 
23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:42.679 23:53:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:42.679 00:11:42.679 real 0m0.274s 00:11:42.679 user 0m0.191s 00:11:42.679 sys 0m0.022s 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.679 23:53:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:42.679 ************************************ 00:11:42.679 END TEST rpc_integrity 00:11:42.679 ************************************ 00:11:42.938 23:53:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:42.938 23:53:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.938 23:53:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.938 23:53:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.938 ************************************ 00:11:42.938 START TEST rpc_plugins 00:11:42.938 ************************************ 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:42.938 { 00:11:42.938 "name": "Malloc1", 00:11:42.938 "aliases": [ 00:11:42.938 "39b68a0f-8e6b-4a5d-9c8c-4d793966ccc7" 00:11:42.938 ], 00:11:42.938 "product_name": "Malloc disk", 00:11:42.938 "block_size": 4096, 00:11:42.938 "num_blocks": 256, 00:11:42.938 "uuid": "39b68a0f-8e6b-4a5d-9c8c-4d793966ccc7", 00:11:42.938 "assigned_rate_limits": { 00:11:42.938 "rw_ios_per_sec": 0, 00:11:42.938 "rw_mbytes_per_sec": 0, 00:11:42.938 "r_mbytes_per_sec": 0, 00:11:42.938 "w_mbytes_per_sec": 0 00:11:42.938 }, 00:11:42.938 "claimed": false, 00:11:42.938 "zoned": false, 00:11:42.938 "supported_io_types": { 00:11:42.938 "read": true, 00:11:42.938 "write": true, 00:11:42.938 "unmap": true, 00:11:42.938 "flush": true, 00:11:42.938 "reset": true, 00:11:42.938 "nvme_admin": false, 00:11:42.938 "nvme_io": false, 00:11:42.938 "nvme_io_md": false, 00:11:42.938 "write_zeroes": true, 00:11:42.938 "zcopy": true, 00:11:42.938 
"get_zone_info": false, 00:11:42.938 "zone_management": false, 00:11:42.938 "zone_append": false, 00:11:42.938 "compare": false, 00:11:42.938 "compare_and_write": false, 00:11:42.938 "abort": true, 00:11:42.938 "seek_hole": false, 00:11:42.938 "seek_data": false, 00:11:42.938 "copy": true, 00:11:42.938 "nvme_iov_md": false 00:11:42.938 }, 00:11:42.938 "memory_domains": [ 00:11:42.938 { 00:11:42.938 "dma_device_id": "system", 00:11:42.938 "dma_device_type": 1 00:11:42.938 }, 00:11:42.938 { 00:11:42.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.938 "dma_device_type": 2 00:11:42.938 } 00:11:42.938 ], 00:11:42.938 "driver_specific": {} 00:11:42.938 } 00:11:42.938 ]' 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:42.938 23:53:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:42.938 00:11:42.938 real 0m0.169s 00:11:42.938 user 0m0.118s 00:11:42.938 sys 0m0.014s 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.938 23:53:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:42.938 ************************************ 00:11:42.938 END TEST rpc_plugins 00:11:42.938 ************************************ 00:11:42.938 23:53:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:42.938 23:53:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.938 23:53:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.938 23:53:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.196 ************************************ 00:11:43.196 START TEST rpc_trace_cmd_test 00:11:43.196 ************************************ 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:43.196 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid465258", 00:11:43.196 "tpoint_group_mask": "0x8", 00:11:43.196 "iscsi_conn": { 00:11:43.196 "mask": "0x2", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "scsi": { 00:11:43.196 "mask": "0x4", 00:11:43.196 "tpoint_mask": 
"0x0" 00:11:43.196 }, 00:11:43.196 "bdev": { 00:11:43.196 "mask": "0x8", 00:11:43.196 "tpoint_mask": "0xffffffffffffffff" 00:11:43.196 }, 00:11:43.196 "nvmf_rdma": { 00:11:43.196 "mask": "0x10", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "nvmf_tcp": { 00:11:43.196 "mask": "0x20", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "ftl": { 00:11:43.196 "mask": "0x40", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "blobfs": { 00:11:43.196 "mask": "0x80", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "dsa": { 00:11:43.196 "mask": "0x200", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "thread": { 00:11:43.196 "mask": "0x400", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "nvme_pcie": { 00:11:43.196 "mask": "0x800", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "iaa": { 00:11:43.196 "mask": "0x1000", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "nvme_tcp": { 00:11:43.196 "mask": "0x2000", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "bdev_nvme": { 00:11:43.196 "mask": "0x4000", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "sock": { 00:11:43.196 "mask": "0x8000", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "blob": { 00:11:43.196 "mask": "0x10000", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "bdev_raid": { 00:11:43.196 "mask": "0x20000", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 }, 00:11:43.196 "scheduler": { 00:11:43.196 "mask": "0x40000", 00:11:43.196 "tpoint_mask": "0x0" 00:11:43.196 } 00:11:43.196 }' 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:43.196 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:43.454 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:43.454 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:43.454 23:53:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:43.454 00:11:43.454 real 0m0.298s 00:11:43.454 user 0m0.264s 00:11:43.454 sys 0m0.027s 00:11:43.454 23:53:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.454 23:53:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 ************************************ 00:11:43.454 END TEST rpc_trace_cmd_test 00:11:43.454 ************************************ 00:11:43.454 23:53:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:43.454 23:53:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:43.454 23:53:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:43.454 23:53:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.454 23:53:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.454 23:53:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 ************************************ 00:11:43.454 START TEST rpc_daemon_integrity 00:11:43.454 ************************************ 00:11:43.454 
23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.454 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:43.454 { 00:11:43.454 "name": "Malloc2", 00:11:43.454 "aliases": [ 00:11:43.455 "5ffc4983-6c05-4135-b35e-12a9916570a4" 00:11:43.455 ], 00:11:43.455 "product_name": "Malloc disk", 00:11:43.455 "block_size": 512, 00:11:43.455 "num_blocks": 16384, 00:11:43.455 "uuid": "5ffc4983-6c05-4135-b35e-12a9916570a4", 00:11:43.455 "assigned_rate_limits": { 00:11:43.455 "rw_ios_per_sec": 0, 00:11:43.455 "rw_mbytes_per_sec": 0, 00:11:43.455 "r_mbytes_per_sec": 0, 00:11:43.455 "w_mbytes_per_sec": 0 00:11:43.455 }, 00:11:43.455 "claimed": false, 00:11:43.455 "zoned": false, 00:11:43.455 "supported_io_types": { 00:11:43.455 "read": true, 00:11:43.455 "write": true, 00:11:43.455 "unmap": true, 00:11:43.455 "flush": true, 00:11:43.455 "reset": true, 00:11:43.455 "nvme_admin": false, 00:11:43.455 "nvme_io": false, 00:11:43.455 "nvme_io_md": false, 00:11:43.455 "write_zeroes": true, 00:11:43.455 "zcopy": true, 00:11:43.455 "get_zone_info": false, 00:11:43.455 "zone_management": false, 00:11:43.455 "zone_append": false, 00:11:43.455 "compare": false, 00:11:43.455 "compare_and_write": false, 00:11:43.455 "abort": true, 00:11:43.455 "seek_hole": false, 00:11:43.455 "seek_data": false, 00:11:43.455 "copy": true, 00:11:43.455 "nvme_iov_md": false 00:11:43.455 }, 00:11:43.455 "memory_domains": [ 00:11:43.455 { 00:11:43.455 "dma_device_id": "system", 00:11:43.455 "dma_device_type": 1 00:11:43.455 }, 00:11:43.455 { 00:11:43.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.455 "dma_device_type": 2 00:11:43.455 } 00:11:43.455 ], 00:11:43.455 "driver_specific": {} 00:11:43.455 } 00:11:43.455 ]' 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:43.455 23:53:21 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.455 [2024-12-09 23:53:21.957155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:43.455 [2024-12-09 23:53:21.957248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.455 [2024-12-09 23:53:21.957308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e8a6a0 00:11:43.455 [2024-12-09 23:53:21.957322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.455 [2024-12-09 23:53:21.959108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.455 [2024-12-09 23:53:21.959144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:43.455 Passthru0 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.455 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.713 23:53:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.713 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:43.713 { 00:11:43.713 "name": "Malloc2", 00:11:43.713 "aliases": [ 00:11:43.713 "5ffc4983-6c05-4135-b35e-12a9916570a4" 00:11:43.713 ], 00:11:43.713 "product_name": "Malloc disk", 00:11:43.713 "block_size": 512, 00:11:43.713 "num_blocks": 16384, 00:11:43.713 "uuid": "5ffc4983-6c05-4135-b35e-12a9916570a4", 00:11:43.713 "assigned_rate_limits": { 00:11:43.713 "rw_ios_per_sec": 0, 00:11:43.713 "rw_mbytes_per_sec": 0, 00:11:43.713 "r_mbytes_per_sec": 0, 00:11:43.713 "w_mbytes_per_sec": 0 00:11:43.713 }, 00:11:43.713 "claimed": true, 00:11:43.713 "claim_type": "exclusive_write", 00:11:43.713 "zoned": false, 00:11:43.713 "supported_io_types": { 00:11:43.713 "read": true, 00:11:43.713 "write": true, 00:11:43.713 "unmap": true, 00:11:43.713 "flush": true, 00:11:43.713 "reset": true, 00:11:43.713 "nvme_admin": false, 00:11:43.713 "nvme_io": false, 00:11:43.713 "nvme_io_md": false, 00:11:43.713 "write_zeroes": true, 00:11:43.713 "zcopy": true, 00:11:43.713 "get_zone_info": false, 00:11:43.713 "zone_management": false, 00:11:43.713 "zone_append": false, 00:11:43.713 "compare": false, 00:11:43.713 "compare_and_write": false, 00:11:43.713 "abort": true, 00:11:43.713 "seek_hole": false, 00:11:43.713 "seek_data": false, 00:11:43.713 "copy": true, 00:11:43.713 "nvme_iov_md": false 00:11:43.713 }, 00:11:43.713 "memory_domains": [ 00:11:43.713 { 00:11:43.713 "dma_device_id": "system", 00:11:43.713 "dma_device_type": 1 00:11:43.713 }, 00:11:43.713 { 00:11:43.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.713 "dma_device_type": 2 00:11:43.713 } 00:11:43.713 ], 00:11:43.713 "driver_specific": {} 00:11:43.713 }, 00:11:43.713 { 00:11:43.713 "name": "Passthru0", 00:11:43.713 "aliases": [ 00:11:43.713 "0299adff-8513-5a7b-ab25-a6560d38a7b3" 00:11:43.713 ], 00:11:43.713 "product_name": "passthru", 00:11:43.713 "block_size": 512, 00:11:43.713 "num_blocks": 16384, 00:11:43.713 "uuid": "0299adff-8513-5a7b-ab25-a6560d38a7b3", 00:11:43.713 "assigned_rate_limits": { 00:11:43.713 "rw_ios_per_sec": 0, 00:11:43.713 "rw_mbytes_per_sec": 0, 00:11:43.713 
"r_mbytes_per_sec": 0, 00:11:43.713 "w_mbytes_per_sec": 0 00:11:43.713 }, 00:11:43.713 "claimed": false, 00:11:43.713 "zoned": false, 00:11:43.713 "supported_io_types": { 00:11:43.713 "read": true, 00:11:43.713 "write": true, 00:11:43.713 "unmap": true, 00:11:43.713 "flush": true, 00:11:43.713 "reset": true, 00:11:43.713 "nvme_admin": false, 00:11:43.713 "nvme_io": false, 00:11:43.713 "nvme_io_md": false, 00:11:43.713 "write_zeroes": true, 00:11:43.713 "zcopy": true, 00:11:43.713 "get_zone_info": false, 00:11:43.713 "zone_management": false, 00:11:43.713 "zone_append": false, 00:11:43.713 "compare": false, 00:11:43.713 "compare_and_write": false, 00:11:43.713 "abort": true, 00:11:43.713 "seek_hole": false, 00:11:43.713 "seek_data": false, 00:11:43.713 "copy": true, 00:11:43.713 "nvme_iov_md": false 00:11:43.713 }, 00:11:43.713 "memory_domains": [ 00:11:43.713 { 00:11:43.713 "dma_device_id": "system", 00:11:43.713 "dma_device_type": 1 00:11:43.713 }, 00:11:43.713 { 00:11:43.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.713 "dma_device_type": 2 00:11:43.713 } 00:11:43.713 ], 00:11:43.713 "driver_specific": { 00:11:43.713 "passthru": { 00:11:43.713 "name": "Passthru0", 00:11:43.713 "base_bdev_name": "Malloc2" 00:11:43.713 } 00:11:43.713 } 00:11:43.713 } 00:11:43.713 ]' 00:11:43.713 23:53:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:43.713 00:11:43.713 real 0m0.273s 00:11:43.713 user 0m0.191s 00:11:43.713 sys 0m0.020s 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.713 23:53:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:43.713 ************************************ 00:11:43.713 END TEST rpc_daemon_integrity 00:11:43.713 ************************************ 00:11:43.713 23:53:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:43.713 23:53:22 rpc -- rpc/rpc.sh@84 -- # killprocess 465258 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 465258 ']' 00:11:43.713 23:53:22 rpc -- 
common/autotest_common.sh@958 -- # kill -0 465258 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@959 -- # uname 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465258 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465258' 00:11:43.713 killing process with pid 465258 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@973 -- # kill 465258 00:11:43.713 23:53:22 rpc -- common/autotest_common.sh@978 -- # wait 465258 00:11:44.648 00:11:44.648 real 0m3.033s 00:11:44.648 user 0m3.539s 00:11:44.648 sys 0m0.960s 00:11:44.648 23:53:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.648 23:53:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.648 ************************************ 00:11:44.648 END TEST rpc 00:11:44.648 ************************************ 00:11:44.648 23:53:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:11:44.648 23:53:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.649 23:53:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.649 23:53:22 -- common/autotest_common.sh@10 -- # set +x 00:11:44.649 ************************************ 00:11:44.649 START TEST skip_rpc 00:11:44.649 ************************************ 00:11:44.649 23:53:22 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:11:44.649 * Looking for test storage... 00:11:44.649 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:11:44.649 23:53:22 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.649 23:53:22 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.649 23:53:22 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.649 23:53:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.649 --rc genhtml_branch_coverage=1 00:11:44.649 --rc genhtml_function_coverage=1 00:11:44.649 --rc genhtml_legend=1 00:11:44.649 --rc geninfo_all_blocks=1 00:11:44.649 --rc geninfo_unexecuted_blocks=1 00:11:44.649 00:11:44.649 ' 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.649 --rc genhtml_branch_coverage=1 00:11:44.649 --rc genhtml_function_coverage=1 00:11:44.649 --rc genhtml_legend=1 00:11:44.649 --rc geninfo_all_blocks=1 00:11:44.649 --rc geninfo_unexecuted_blocks=1 00:11:44.649 00:11:44.649 ' 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.649 --rc genhtml_branch_coverage=1 00:11:44.649 --rc genhtml_function_coverage=1 00:11:44.649 --rc genhtml_legend=1 00:11:44.649 --rc geninfo_all_blocks=1 00:11:44.649 --rc geninfo_unexecuted_blocks=1 00:11:44.649 00:11:44.649 ' 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.649 --rc genhtml_branch_coverage=1 00:11:44.649 --rc genhtml_function_coverage=1 00:11:44.649 --rc genhtml_legend=1 00:11:44.649 --rc geninfo_all_blocks=1 00:11:44.649 --rc geninfo_unexecuted_blocks=1 00:11:44.649 00:11:44.649 ' 00:11:44.649 23:53:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:11:44.649 23:53:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/log.txt 00:11:44.649 23:53:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.649 23:53:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.649 ************************************ 00:11:44.649 START TEST skip_rpc 00:11:44.649 ************************************ 00:11:44.649 23:53:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:11:44.649 23:53:23 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=465829 00:11:44.649 23:53:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:44.649 23:53:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:44.649 23:53:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:44.907 [2024-12-09 23:53:23.218831] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:11:44.908 [2024-12-09 23:53:23.218920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465829 ] 00:11:44.908 [2024-12-09 23:53:23.354077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.166 [2024-12-09 23:53:23.456268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.424 [2024-12-09 23:53:23.840950] 'OCF_Core' volume operations registered 00:11:45.425 [2024-12-09 23:53:23.841045] 'OCF_Cache' volume operations registered 00:11:45.425 [2024-12-09 23:53:23.850280] 'OCF Composite' volume operations registered 00:11:45.425 [2024-12-09 23:53:23.859289] 'SPDK_block_device' volume operations registered 00:11:50.688 23:53:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:50.688 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:50.688 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:50.688 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 465829 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 465829 ']' 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 465829 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465829 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465829' 00:11:50.689 killing process with pid 465829 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 465829 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 465829 00:11:50.689 00:11:50.689 real 0m5.833s 00:11:50.689 user 0m5.135s 00:11:50.689 sys 0m0.687s 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.689 23:53:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.689 ************************************ 00:11:50.689 END TEST skip_rpc 00:11:50.689 ************************************ 00:11:50.689 23:53:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:50.689 23:53:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:50.689 23:53:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.689 23:53:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.689 ************************************ 00:11:50.689 START TEST skip_rpc_with_json 00:11:50.689 ************************************ 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=466512 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 466512 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 466512 ']' 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.689 23:53:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:50.689 [2024-12-09 23:53:29.142492] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
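The skip_rpc_with_json test starting here first provokes an error over RPC (the nvmf_get_transports call below fails with -19 because no TCP transport exists yet), then creates the transport and snapshots the full target state with save_config; a second target (pid 466662 later in the log) replays that file via --json with the RPC server disabled, and the test greps its log for "TCP Transport Init". A hedged sketch of the same round trip, assuming a running spdk_tgt and scripts/rpc.py on the default socket:

```bash
# Hedged sketch of the skip_rpc_with_json round trip.
RPC=./scripts/rpc.py

# With no transport created yet this returns JSON-RPC error -19,
# mirroring the "No such device" response in the transcript.
$RPC nvmf_get_transports --trtype tcp || echo "no TCP transport yet"

# Create the TCP transport, then dump the live configuration to a file.
$RPC nvmf_create_transport -t tcp
$RPC save_config > config.json

# The saved file can boot an identical target with no RPC server,
# which is what the second half of the test does:
#   ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json
jq -r '.subsystems[].subsystem' config.json
```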
00:11:50.689 [2024-12-09 23:53:29.142610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466512 ] 00:11:50.947 [2024-12-09 23:53:29.260203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.947 [2024-12-09 23:53:29.362021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.514 [2024-12-09 23:53:29.763953] 'OCF_Core' volume operations registered 00:11:51.514 [2024-12-09 23:53:29.764012] 'OCF_Cache' volume operations registered 00:11:51.514 [2024-12-09 23:53:29.772121] 'OCF Composite' volume operations registered 00:11:51.514 [2024-12-09 23:53:29.780371] 'SPDK_block_device' volume operations registered 00:11:51.514 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.514 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:11:51.514 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:51.514 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.514 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:51.772 [2024-12-09 23:53:30.037479] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:51.772 request: 00:11:51.772 { 00:11:51.772 "trtype": "tcp", 00:11:51.772 "method": "nvmf_get_transports", 00:11:51.772 "req_id": 1 00:11:51.772 } 00:11:51.772 Got JSON-RPC error response 00:11:51.772 response: 00:11:51.772 { 00:11:51.772 "code": -19, 00:11:51.772 "message": "No such device" 00:11:51.772 } 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:51.772 [2024-12-09 23:53:30.045513] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.772 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:11:51.772 { 00:11:51.772 "subsystems": [ 00:11:51.772 { 00:11:51.772 "subsystem": "fsdev", 00:11:51.772 "config": [ 00:11:51.772 { 00:11:51.772 "method": "fsdev_set_opts", 00:11:51.772 "params": { 00:11:51.772 "fsdev_io_pool_size": 65535, 00:11:51.772 "fsdev_io_cache_size": 256 00:11:51.772 } 00:11:51.772 } 00:11:51.772 ] 00:11:51.772 }, 00:11:51.772 { 00:11:51.772 "subsystem": "keyring", 00:11:51.772 "config": [] 00:11:51.772 }, 00:11:51.772 { 00:11:51.772 "subsystem": "iobuf", 00:11:51.772 "config": [ 00:11:51.772 { 00:11:51.772 "method": "iobuf_set_options", 00:11:51.772 
"params": { 00:11:51.772 "small_pool_count": 8192, 00:11:51.772 "large_pool_count": 1024, 00:11:51.772 "small_bufsize": 8192, 00:11:51.772 "large_bufsize": 135168, 00:11:51.772 "enable_numa": false 00:11:51.772 } 00:11:51.772 } 00:11:51.772 ] 00:11:51.772 }, 00:11:51.772 { 00:11:51.772 "subsystem": "sock", 00:11:51.772 "config": [ 00:11:51.772 { 00:11:51.772 "method": "sock_set_default_impl", 00:11:51.772 "params": { 00:11:51.772 "impl_name": "posix" 00:11:51.772 } 00:11:51.772 }, 00:11:51.772 { 00:11:51.772 "method": "sock_impl_set_options", 00:11:51.772 "params": { 00:11:51.772 "impl_name": "ssl", 00:11:51.772 "recv_buf_size": 4096, 00:11:51.772 "send_buf_size": 4096, 00:11:51.772 "enable_recv_pipe": true, 00:11:51.772 "enable_quickack": false, 00:11:51.772 "enable_placement_id": 0, 00:11:51.773 "enable_zerocopy_send_server": true, 00:11:51.773 "enable_zerocopy_send_client": false, 00:11:51.773 "zerocopy_threshold": 0, 00:11:51.773 "tls_version": 0, 00:11:51.773 "enable_ktls": false 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "sock_impl_set_options", 00:11:51.773 "params": { 00:11:51.773 "impl_name": "posix", 00:11:51.773 "recv_buf_size": 2097152, 00:11:51.773 "send_buf_size": 2097152, 00:11:51.773 "enable_recv_pipe": true, 00:11:51.773 "enable_quickack": false, 00:11:51.773 "enable_placement_id": 0, 00:11:51.773 "enable_zerocopy_send_server": true, 00:11:51.773 "enable_zerocopy_send_client": false, 00:11:51.773 "zerocopy_threshold": 0, 00:11:51.773 "tls_version": 0, 00:11:51.773 "enable_ktls": false 00:11:51.773 } 00:11:51.773 } 00:11:51.773 ] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "vmd", 00:11:51.773 "config": [] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "accel", 00:11:51.773 "config": [ 00:11:51.773 { 00:11:51.773 "method": "accel_set_options", 00:11:51.773 "params": { 00:11:51.773 "small_cache_size": 128, 00:11:51.773 "large_cache_size": 16, 00:11:51.773 "task_count": 2048, 00:11:51.773 "sequence_count": 2048, 00:11:51.773 "buf_count": 2048 00:11:51.773 } 00:11:51.773 } 00:11:51.773 ] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "bdev", 00:11:51.773 "config": [ 00:11:51.773 { 00:11:51.773 "method": "bdev_set_options", 00:11:51.773 "params": { 00:11:51.773 "bdev_io_pool_size": 65535, 00:11:51.773 "bdev_io_cache_size": 256, 00:11:51.773 "bdev_auto_examine": true, 00:11:51.773 "iobuf_small_cache_size": 128, 00:11:51.773 "iobuf_large_cache_size": 16 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "bdev_raid_set_options", 00:11:51.773 "params": { 00:11:51.773 "process_window_size_kb": 1024, 00:11:51.773 "process_max_bandwidth_mb_sec": 0 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "bdev_iscsi_set_options", 00:11:51.773 "params": { 00:11:51.773 "timeout_sec": 30 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "bdev_nvme_set_options", 00:11:51.773 "params": { 00:11:51.773 "action_on_timeout": "none", 00:11:51.773 "timeout_us": 0, 00:11:51.773 "timeout_admin_us": 0, 00:11:51.773 "keep_alive_timeout_ms": 10000, 00:11:51.773 "arbitration_burst": 0, 00:11:51.773 "low_priority_weight": 0, 00:11:51.773 "medium_priority_weight": 0, 00:11:51.773 "high_priority_weight": 0, 00:11:51.773 "nvme_adminq_poll_period_us": 10000, 00:11:51.773 "nvme_ioq_poll_period_us": 0, 00:11:51.773 "io_queue_requests": 0, 00:11:51.773 "delay_cmd_submit": true, 00:11:51.773 "transport_retry_count": 4, 00:11:51.773 "bdev_retry_count": 3, 00:11:51.773 "transport_ack_timeout": 0, 
00:11:51.773 "ctrlr_loss_timeout_sec": 0, 00:11:51.773 "reconnect_delay_sec": 0, 00:11:51.773 "fast_io_fail_timeout_sec": 0, 00:11:51.773 "disable_auto_failback": false, 00:11:51.773 "generate_uuids": false, 00:11:51.773 "transport_tos": 0, 00:11:51.773 "nvme_error_stat": false, 00:11:51.773 "rdma_srq_size": 0, 00:11:51.773 "io_path_stat": false, 00:11:51.773 "allow_accel_sequence": false, 00:11:51.773 "rdma_max_cq_size": 0, 00:11:51.773 "rdma_cm_event_timeout_ms": 0, 00:11:51.773 "dhchap_digests": [ 00:11:51.773 "sha256", 00:11:51.773 "sha384", 00:11:51.773 "sha512" 00:11:51.773 ], 00:11:51.773 "dhchap_dhgroups": [ 00:11:51.773 "null", 00:11:51.773 "ffdhe2048", 00:11:51.773 "ffdhe3072", 00:11:51.773 "ffdhe4096", 00:11:51.773 "ffdhe6144", 00:11:51.773 "ffdhe8192" 00:11:51.773 ] 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "bdev_nvme_set_hotplug", 00:11:51.773 "params": { 00:11:51.773 "period_us": 100000, 00:11:51.773 "enable": false 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "bdev_wait_for_examine" 00:11:51.773 } 00:11:51.773 ] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "scsi", 00:11:51.773 "config": null 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "scheduler", 00:11:51.773 "config": [ 00:11:51.773 { 00:11:51.773 "method": "framework_set_scheduler", 00:11:51.773 "params": { 00:11:51.773 "name": "static" 00:11:51.773 } 00:11:51.773 } 00:11:51.773 ] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "vhost_scsi", 00:11:51.773 "config": [] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "vhost_blk", 00:11:51.773 "config": [] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "ublk", 00:11:51.773 "config": [] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "nbd", 00:11:51.773 "config": [] 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "subsystem": "nvmf", 00:11:51.773 "config": [ 00:11:51.773 { 00:11:51.773 "method": "nvmf_set_config", 00:11:51.773 "params": { 00:11:51.773 "discovery_filter": "match_any", 00:11:51.773 "admin_cmd_passthru": { 00:11:51.773 "identify_ctrlr": false 00:11:51.773 }, 00:11:51.773 "dhchap_digests": [ 00:11:51.773 "sha256", 00:11:51.773 "sha384", 00:11:51.773 "sha512" 00:11:51.773 ], 00:11:51.773 "dhchap_dhgroups": [ 00:11:51.773 "null", 00:11:51.773 "ffdhe2048", 00:11:51.773 "ffdhe3072", 00:11:51.773 "ffdhe4096", 00:11:51.773 "ffdhe6144", 00:11:51.773 "ffdhe8192" 00:11:51.773 ] 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "nvmf_set_max_subsystems", 00:11:51.773 "params": { 00:11:51.773 "max_subsystems": 1024 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "nvmf_set_crdt", 00:11:51.773 "params": { 00:11:51.773 "crdt1": 0, 00:11:51.773 "crdt2": 0, 00:11:51.773 "crdt3": 0 00:11:51.773 } 00:11:51.773 }, 00:11:51.773 { 00:11:51.773 "method": "nvmf_create_transport", 00:11:51.773 "params": { 00:11:51.773 "trtype": "TCP", 00:11:51.773 "max_queue_depth": 128, 00:11:51.773 "max_io_qpairs_per_ctrlr": 127, 00:11:51.773 "in_capsule_data_size": 4096, 00:11:51.773 "max_io_size": 131072, 00:11:51.773 "io_unit_size": 131072, 00:11:51.773 "max_aq_depth": 128, 00:11:51.773 "num_shared_buffers": 511, 00:11:51.773 "buf_cache_size": 4294967295, 00:11:51.773 "dif_insert_or_strip": false, 00:11:51.773 "zcopy": false, 00:11:51.773 "c2h_success": true, 00:11:51.773 "sock_priority": 0, 00:11:51.773 "abort_timeout_sec": 1, 00:11:51.773 "ack_timeout": 0, 00:11:51.773 "data_wr_pool_size": 0 00:11:51.773 } 00:11:51.773 } 00:11:51.773 ] 00:11:51.773 }, 
00:11:51.773 { 00:11:51.773 "subsystem": "iscsi", 00:11:51.773 "config": [ 00:11:51.773 { 00:11:51.773 "method": "iscsi_set_options", 00:11:51.773 "params": { 00:11:51.773 "node_base": "iqn.2016-06.io.spdk", 00:11:51.773 "max_sessions": 128, 00:11:51.773 "max_connections_per_session": 2, 00:11:51.773 "max_queue_depth": 64, 00:11:51.773 "default_time2wait": 2, 00:11:51.773 "default_time2retain": 20, 00:11:51.773 "first_burst_length": 8192, 00:11:51.773 "immediate_data": true, 00:11:51.773 "allow_duplicated_isid": false, 00:11:51.773 "error_recovery_level": 0, 00:11:51.773 "nop_timeout": 60, 00:11:51.773 "nop_in_interval": 30, 00:11:51.773 "disable_chap": false, 00:11:51.773 "require_chap": false, 00:11:51.773 "mutual_chap": false, 00:11:51.773 "chap_group": 0, 00:11:51.773 "max_large_datain_per_connection": 64, 00:11:51.773 "max_r2t_per_connection": 4, 00:11:51.773 "pdu_pool_size": 36864, 00:11:51.773 "immediate_data_pool_size": 16384, 00:11:51.773 "data_out_pool_size": 2048 00:11:51.773 } 00:11:51.773 } 00:11:51.773 ] 00:11:51.773 } 00:11:51.773 ] 00:11:51.773 } 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 466512 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 466512 ']' 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 466512 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466512 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466512' 00:11:51.773 killing process with pid 466512 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 466512 00:11:51.773 23:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 466512 00:11:52.707 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=466662 00:11:52.707 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:11:52.707 23:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:57.970 23:53:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 466662 00:11:57.970 23:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 466662 ']' 00:11:57.970 23:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 466662 00:11:57.970 23:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:57.970 23:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.970 23:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466662 00:11:57.970 23:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:11:57.970 23:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.970 23:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466662' 00:11:57.970 killing process with pid 466662 00:11:57.970 23:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 466662 00:11:57.970 23:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 466662 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/log.txt 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/log.txt 00:11:58.536 00:11:58.536 real 0m7.771s 00:11:58.536 user 0m6.890s 00:11:58.536 sys 0m1.474s 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:58.536 ************************************ 00:11:58.536 END TEST skip_rpc_with_json 00:11:58.536 ************************************ 00:11:58.536 23:53:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:58.536 23:53:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.536 23:53:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.536 23:53:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.536 ************************************ 00:11:58.536 START TEST skip_rpc_with_delay 00:11:58.536 ************************************ 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:11:58.536 23:53:36 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:58.536 [2024-12-09 23:53:36.933191] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.536 00:11:58.536 real 0m0.083s 00:11:58.536 user 0m0.054s 00:11:58.536 sys 0m0.028s 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.536 23:53:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:58.536 ************************************ 00:11:58.536 END TEST skip_rpc_with_delay 00:11:58.536 ************************************ 00:11:58.536 23:53:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:58.536 23:53:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:58.536 23:53:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:58.536 23:53:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.536 23:53:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.536 23:53:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.536 ************************************ 00:11:58.536 START TEST exit_on_failed_rpc_init 00:11:58.536 ************************************ 00:11:58.536 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:58.536 23:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=467481 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 467481 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 467481 ']' 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.537 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:58.795 [2024-12-09 23:53:37.083270] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
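The exit_on_failed_rpc_init test starting up here checks that a second spdk_tgt exits non-zero when it cannot bind the RPC socket the first instance already owns. A stripped-down sketch of the contention the harness sets up; the binary path is the one this job uses, and the sleep is a crude stand-in for the harness's waitforlisten:

```bash
# Hedged sketch: two spdk_tgt instances contending for /var/tmp/spdk.sock.
TGT=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt

$TGT -m 0x1 &      # first instance owns the default RPC socket
pid=$!
sleep 1            # crude stand-in for waitforlisten

$TGT -m 0x2        # expected to fail: "RPC Unix domain socket path ... in use"
echo "second target exited with $?"

kill "$pid"
```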
00:11:58.795 [2024-12-09 23:53:37.083378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467481 ] 00:11:58.795 [2024-12-09 23:53:37.203255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.795 [2024-12-09 23:53:37.314428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.362 [2024-12-09 23:53:37.714096] 'OCF_Core' volume operations registered 00:11:59.362 [2024-12-09 23:53:37.714195] 'OCF_Cache' volume operations registered 00:11:59.362 [2024-12-09 23:53:37.723394] 'OCF Composite' volume operations registered 00:11:59.362 [2024-12-09 23:53:37.732288] 'SPDK_block_device' volume operations registered 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:11:59.622 23:53:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:11:59.622 [2024-12-09 23:53:38.110179] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
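The NOT wrapper driving the next lines inverts an expected failure: it captures the exit status, folds signal-style codes above 128 down (the 234 below becomes 106), and succeeds only when the wrapped command failed. A condensed rendition consistent with this xtrace, not the exact autotest_common.sh code:

```bash
# Condensed, hedged rendition of the harness's NOT helper as the xtrace
# suggests; the real version handles additional cases via its case block.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))  # 234 -> 106, as in the trace
    (( es != 0 ))                         # succeed only if the command failed
}

NOT false && echo "failure was inverted to success"
```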
00:11:59.622 [2024-12-09 23:53:38.110346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467506 ] 00:11:59.880 [2024-12-09 23:53:38.210807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.880 [2024-12-09 23:53:38.268063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.880 [2024-12-09 23:53:38.268198] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:59.880 [2024-12-09 23:53:38.268217] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:59.880 [2024-12-09 23:53:38.268228] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 467481 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 467481 ']' 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 467481 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467481 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467481' 00:11:59.880 killing process with pid 467481 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 467481 00:11:59.880 23:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 467481 00:12:00.813 00:12:00.813 real 0m2.110s 00:12:00.813 user 0m2.144s 00:12:00.813 sys 0m0.861s 00:12:00.813 23:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.813 23:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:00.813 ************************************ 00:12:00.813 END TEST exit_on_failed_rpc_init 00:12:00.813 ************************************ 00:12:00.813 23:53:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/config.json 00:12:00.813 00:12:00.813 real 0m16.237s 00:12:00.813 user 0m14.448s 
00:12:00.813 sys 0m3.288s 00:12:00.813 23:53:39 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.813 23:53:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.813 ************************************ 00:12:00.813 END TEST skip_rpc 00:12:00.813 ************************************ 00:12:00.813 23:53:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:12:00.813 23:53:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:00.813 23:53:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.813 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:12:00.813 ************************************ 00:12:00.813 START TEST rpc_client 00:12:00.813 ************************************ 00:12:00.813 23:53:39 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:12:00.813 * Looking for test storage... 00:12:00.813 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client 00:12:00.813 23:53:39 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:00.813 23:53:39 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:12:00.813 23:53:39 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@345 -- # : 1 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@353 -- # local d=1 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@355 -- # echo 1 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@353 -- # local d=2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@355 -- # echo 2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.072 23:53:39 rpc_client -- scripts/common.sh@368 -- # return 0 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.072 --rc genhtml_branch_coverage=1 00:12:01.072 --rc genhtml_function_coverage=1 00:12:01.072 --rc genhtml_legend=1 00:12:01.072 --rc geninfo_all_blocks=1 00:12:01.072 --rc geninfo_unexecuted_blocks=1 00:12:01.072 00:12:01.072 ' 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.072 --rc genhtml_branch_coverage=1 00:12:01.072 --rc genhtml_function_coverage=1 00:12:01.072 --rc genhtml_legend=1 00:12:01.072 --rc geninfo_all_blocks=1 00:12:01.072 --rc geninfo_unexecuted_blocks=1 00:12:01.072 00:12:01.072 ' 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.072 --rc genhtml_branch_coverage=1 00:12:01.072 --rc genhtml_function_coverage=1 00:12:01.072 --rc genhtml_legend=1 00:12:01.072 --rc geninfo_all_blocks=1 00:12:01.072 --rc geninfo_unexecuted_blocks=1 00:12:01.072 00:12:01.072 ' 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.072 --rc genhtml_branch_coverage=1 00:12:01.072 --rc genhtml_function_coverage=1 00:12:01.072 --rc genhtml_legend=1 00:12:01.072 --rc geninfo_all_blocks=1 00:12:01.072 --rc geninfo_unexecuted_blocks=1 00:12:01.072 00:12:01.072 ' 00:12:01.072 23:53:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:12:01.072 OK 00:12:01.072 23:53:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:01.072 00:12:01.072 real 0m0.180s 00:12:01.072 user 0m0.117s 00:12:01.072 sys 0m0.071s 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.072 23:53:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:12:01.072 ************************************ 00:12:01.072 END TEST rpc_client 00:12:01.072 ************************************ 00:12:01.072 23:53:39 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh 00:12:01.072 
23:53:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.072 23:53:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.072 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:12:01.072 ************************************ 00:12:01.072 START TEST json_config 00:12:01.072 ************************************ 00:12:01.072 23:53:39 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh 00:12:01.072 23:53:39 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.072 23:53:39 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.072 23:53:39 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.072 23:53:39 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.072 23:53:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.072 23:53:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.072 23:53:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.072 23:53:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.072 23:53:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.072 23:53:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.072 23:53:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.072 23:53:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.072 23:53:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.072 23:53:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.072 23:53:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.072 23:53:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:12:01.072 23:53:39 json_config -- scripts/common.sh@345 -- # : 1 00:12:01.072 23:53:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.072 23:53:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.072 23:53:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:12:01.330 23:53:39 json_config -- scripts/common.sh@353 -- # local d=1 00:12:01.330 23:53:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.330 23:53:39 json_config -- scripts/common.sh@355 -- # echo 1 00:12:01.330 23:53:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.330 23:53:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:12:01.330 23:53:39 json_config -- scripts/common.sh@353 -- # local d=2 00:12:01.330 23:53:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.330 23:53:39 json_config -- scripts/common.sh@355 -- # echo 2 00:12:01.331 23:53:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.331 23:53:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.331 23:53:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.331 23:53:39 json_config -- scripts/common.sh@368 -- # return 0 00:12:01.331 23:53:39 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.331 23:53:39 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.331 --rc genhtml_branch_coverage=1 00:12:01.331 --rc genhtml_function_coverage=1 00:12:01.331 --rc genhtml_legend=1 00:12:01.331 --rc geninfo_all_blocks=1 00:12:01.331 --rc geninfo_unexecuted_blocks=1 00:12:01.331 00:12:01.331 ' 00:12:01.331 23:53:39 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.331 --rc genhtml_branch_coverage=1 00:12:01.331 --rc genhtml_function_coverage=1 00:12:01.331 --rc genhtml_legend=1 00:12:01.331 --rc geninfo_all_blocks=1 00:12:01.331 --rc geninfo_unexecuted_blocks=1 00:12:01.331 00:12:01.331 ' 00:12:01.331 23:53:39 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.331 --rc genhtml_branch_coverage=1 00:12:01.331 --rc genhtml_function_coverage=1 00:12:01.331 --rc genhtml_legend=1 00:12:01.331 --rc geninfo_all_blocks=1 00:12:01.331 --rc geninfo_unexecuted_blocks=1 00:12:01.331 00:12:01.331 ' 00:12:01.331 23:53:39 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.331 --rc genhtml_branch_coverage=1 00:12:01.331 --rc genhtml_function_coverage=1 00:12:01.331 --rc genhtml_legend=1 00:12:01.331 --rc geninfo_all_blocks=1 00:12:01.331 --rc geninfo_unexecuted_blocks=1 00:12:01.331 00:12:01.331 ' 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
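Each of these tests sources test/nvmf/common.sh before doing anything else; the entries above and immediately below record the defaults it establishes. A condensed sketch of those assignments — the host-ID derivation via parameter expansion is an assumption here, since the trace only shows the resulting values:

    # NVMe-oF test defaults, as traced from test/nvmf/common.sh.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME

    # A host NQN is generated per run; its trailing UUID doubles as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed derivation: strip through last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn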
00:12:01.331 23:53:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a882507-757a-e411-bc42-001e67d39171 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4a882507-757a-e411-bc42-001e67d39171 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:12:01.331 23:53:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.331 23:53:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.331 23:53:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.331 23:53:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.331 23:53:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.331 23:53:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.331 23:53:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.331 23:53:39 json_config -- paths/export.sh@5 -- # export PATH 00:12:01.331 23:53:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@51 -- # : 0 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.331 
23:53:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.331 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.331 23:53:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/common.sh 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:12:01.331 WARNING: No tests are enabled so not running JSON configuration tests 00:12:01.331 23:53:39 json_config -- json_config/json_config.sh@28 -- # exit 0 00:12:01.331 00:12:01.331 real 0m0.162s 00:12:01.331 user 0m0.116s 00:12:01.331 sys 0m0.051s 00:12:01.331 23:53:39 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.331 23:53:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:01.331 ************************************ 00:12:01.331 END TEST json_config 00:12:01.331 ************************************ 00:12:01.331 23:53:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:12:01.331 23:53:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.331 23:53:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.331 23:53:39 -- common/autotest_common.sh@10 -- # set +x 00:12:01.331 ************************************ 00:12:01.331 START TEST json_config_extra_key 00:12:01.331 ************************************ 00:12:01.331 23:53:39 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:12:01.331 23:53:39 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.331 23:53:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.331 23:53:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.331 23:53:39 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.331 23:53:39 
json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.331 23:53:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:12:01.331 23:53:39 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.331 23:53:39 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.332 --rc genhtml_branch_coverage=1 00:12:01.332 --rc genhtml_function_coverage=1 00:12:01.332 --rc genhtml_legend=1 00:12:01.332 --rc geninfo_all_blocks=1 00:12:01.332 --rc geninfo_unexecuted_blocks=1 00:12:01.332 00:12:01.332 ' 00:12:01.332 23:53:39 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.332 --rc genhtml_branch_coverage=1 00:12:01.332 --rc genhtml_function_coverage=1 00:12:01.332 --rc genhtml_legend=1 00:12:01.332 --rc geninfo_all_blocks=1 00:12:01.332 --rc geninfo_unexecuted_blocks=1 00:12:01.332 00:12:01.332 ' 00:12:01.332 23:53:39 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.332 --rc genhtml_branch_coverage=1 00:12:01.332 --rc genhtml_function_coverage=1 00:12:01.332 --rc genhtml_legend=1 00:12:01.332 --rc geninfo_all_blocks=1 00:12:01.332 --rc geninfo_unexecuted_blocks=1 00:12:01.332 00:12:01.332 ' 00:12:01.332 23:53:39 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:12:01.332 --rc genhtml_branch_coverage=1 00:12:01.332 --rc genhtml_function_coverage=1 00:12:01.332 --rc genhtml_legend=1 00:12:01.332 --rc geninfo_all_blocks=1 00:12:01.332 --rc geninfo_unexecuted_blocks=1 00:12:01.332 00:12:01.332 ' 00:12:01.332 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.332 23:53:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.590 23:53:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4a882507-757a-e411-bc42-001e67d39171 00:12:01.590 23:53:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4a882507-757a-e411-bc42-001e67d39171 00:12:01.590 23:53:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.590 23:53:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:12:01.591 23:53:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.591 23:53:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.591 23:53:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.591 23:53:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.591 23:53:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.591 23:53:39 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.591 23:53:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.591 23:53:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:01.591 23:53:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.591 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.591 23:53:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/common.sh 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json') 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:01.591 INFO: launching applications... 00:12:01.591 23:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=467933 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:01.591 Waiting for target to run... 00:12:01.591 23:53:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 467933 /var/tmp/spdk_tgt.sock 00:12:01.591 23:53:39 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 467933 ']' 00:12:01.591 23:53:39 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:01.591 23:53:39 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.591 23:53:39 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:01.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:01.591 23:53:39 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.591 23:53:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:01.591 [2024-12-09 23:53:39.927062] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
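The associative arrays just traced (`app_pid`, `app_socket`, `app_params`, `configs_path`) drive json_config_test_start_app: spdk_tgt is launched with the recorded parameters plus the extra_key.json config, and the harness blocks until the RPC socket answers. A rough sketch of that start-up, with `waitforlisten` reduced to a socket-existence poll (the real autotest_common.sh helper also confirms readiness over RPC):

    # Launch the target with the parameters recorded above and wait for it.
    SPDK_BIN=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
    RPC_SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" \
        --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json &
    app_pid=$!

    waitforlisten() {            # simplified stand-in for the shared helper
        local pid=$1 sock=$2 i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during start-up
            [[ -S $sock ]] && return 0               # socket exists: target is up
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$app_pid" "$RPC_SOCK" || exit 1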
00:12:01.591 [2024-12-09 23:53:39.927174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467933 ] 00:12:02.157 [2024-12-09 23:53:40.519275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.157 [2024-12-09 23:53:40.615363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.415 [2024-12-09 23:53:40.765090] 'OCF_Core' volume operations registered 00:12:02.415 [2024-12-09 23:53:40.765147] 'OCF_Cache' volume operations registered 00:12:02.415 [2024-12-09 23:53:40.770660] 'OCF Composite' volume operations registered 00:12:02.415 [2024-12-09 23:53:40.776345] 'SPDK_block_device' volume operations registered 00:12:02.983 23:53:41 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.983 23:53:41 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:02.983 00:12:02.983 23:53:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:12:02.983 INFO: shutting down applications... 00:12:02.983 23:53:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 467933 ]] 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 467933 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 467933 00:12:02.983 23:53:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:03.240 23:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:03.240 23:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:03.240 23:53:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 467933 00:12:03.240 23:53:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:03.806 23:53:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:03.806 23:53:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:03.806 23:53:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 467933 00:12:03.806 23:53:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:03.806 23:53:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:03.806 23:53:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:03.806 23:53:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:03.806 SPDK target shutdown done 00:12:03.806 23:53:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:03.806 Success 00:12:03.806 00:12:03.806 real 0m2.538s 00:12:03.806 user 0m2.284s 00:12:03.806 sys 0m0.778s 00:12:03.806 23:53:42 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
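The shutdown just traced is a bounded polling loop: SIGINT first, then up to thirty `kill -0` probes half a second apart before the harness prints "SPDK target shutdown done". A sketch of the same pattern; the `kill -9` escalation at the end is an assumption, not something this trace exercised:

    # Graceful shutdown with bounded polling, per the json_config/common.sh trace.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown finished
            sleep 0.5
        done
        if kill -0 "$pid" 2>/dev/null; then
            kill -9 "$pid"    # assumed escalation for a hung target
            return 1
        fi
        echo 'SPDK target shutdown done'
    }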
00:12:03.806 23:53:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:03.806 ************************************ 00:12:03.806 END TEST json_config_extra_key 00:12:03.806 ************************************ 00:12:03.806 23:53:42 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:03.806 23:53:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.806 23:53:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.806 23:53:42 -- common/autotest_common.sh@10 -- # set +x 00:12:03.806 ************************************ 00:12:03.806 START TEST alias_rpc 00:12:03.806 ************************************ 00:12:03.806 23:53:42 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:04.064 * Looking for test storage... 00:12:04.064 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc 00:12:04.064 23:53:42 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:04.064 23:53:42 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:04.064 23:53:42 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:04.064 23:53:42 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@345 -- # : 1 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.064 23:53:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.065 23:53:42 alias_rpc -- scripts/common.sh@368 -- # return 0 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.065 --rc genhtml_branch_coverage=1 00:12:04.065 --rc genhtml_function_coverage=1 00:12:04.065 --rc genhtml_legend=1 00:12:04.065 --rc geninfo_all_blocks=1 00:12:04.065 --rc geninfo_unexecuted_blocks=1 00:12:04.065 00:12:04.065 ' 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.065 --rc genhtml_branch_coverage=1 00:12:04.065 --rc genhtml_function_coverage=1 00:12:04.065 --rc genhtml_legend=1 00:12:04.065 --rc geninfo_all_blocks=1 00:12:04.065 --rc geninfo_unexecuted_blocks=1 00:12:04.065 00:12:04.065 ' 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.065 --rc genhtml_branch_coverage=1 00:12:04.065 --rc genhtml_function_coverage=1 00:12:04.065 --rc genhtml_legend=1 00:12:04.065 --rc geninfo_all_blocks=1 00:12:04.065 --rc geninfo_unexecuted_blocks=1 00:12:04.065 00:12:04.065 ' 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.065 --rc genhtml_branch_coverage=1 00:12:04.065 --rc genhtml_function_coverage=1 00:12:04.065 --rc genhtml_legend=1 00:12:04.065 --rc geninfo_all_blocks=1 00:12:04.065 --rc geninfo_unexecuted_blocks=1 00:12:04.065 00:12:04.065 ' 00:12:04.065 23:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:04.065 23:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=468254 00:12:04.065 23:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:12:04.065 23:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 468254 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 468254 ']' 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:04.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.065 23:53:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.322 [2024-12-09 23:53:42.630282] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:04.322 [2024-12-09 23:53:42.630452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468254 ] 00:12:04.322 [2024-12-09 23:53:42.772915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.580 [2024-12-09 23:53:42.873936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.838 [2024-12-09 23:53:43.277176] 'OCF_Core' volume operations registered 00:12:04.839 [2024-12-09 23:53:43.277271] 'OCF_Cache' volume operations registered 00:12:04.839 [2024-12-09 23:53:43.286433] 'OCF Composite' volume operations registered 00:12:04.839 [2024-12-09 23:53:43.294779] 'SPDK_block_device' volume operations registered 00:12:05.096 23:53:43 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.096 23:53:43 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:05.096 23:53:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py load_config -i 00:12:05.353 23:53:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 468254 00:12:05.353 23:53:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 468254 ']' 00:12:05.353 23:53:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 468254 00:12:05.353 23:53:43 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:12:05.353 23:53:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.353 23:53:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468254 00:12:05.611 23:53:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.611 23:53:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.611 23:53:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468254' 00:12:05.611 killing process with pid 468254 00:12:05.612 23:53:43 alias_rpc -- common/autotest_common.sh@973 -- # kill 468254 00:12:05.612 23:53:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 468254 00:12:06.177 00:12:06.177 real 0m2.344s 00:12:06.177 user 0m2.300s 00:12:06.177 sys 0m0.886s 00:12:06.177 23:53:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.177 23:53:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.177 ************************************ 00:12:06.177 END TEST alias_rpc 00:12:06.177 ************************************ 00:12:06.177 23:53:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:12:06.178 23:53:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh 00:12:06.178 23:53:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.178 23:53:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.178 23:53:44 -- common/autotest_common.sh@10 -- # set +x 00:12:06.178 ************************************ 00:12:06.178 START TEST spdkcli_tcp 00:12:06.178 ************************************ 
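The alias_rpc test above ends with `killprocess`, which double-checks that the PID still names an SPDK reactor (`ps --no-headers -o comm=` reporting `reactor_0`) before signalling it, so a recycled PID is never killed by mistake. A sketch of that guard; the sudo branch is simplified from what the real autotest_common.sh helper does:

    # Kill a test target only after confirming the PID is still ours.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            return 1    # assumption: the real helper targets sudo's child instead
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # wait only reaps our own children
    }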
00:12:06.178 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh 00:12:06.435 * Looking for test storage... 00:12:06.435 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli 00:12:06.435 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.435 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.435 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.693 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:12:06.693 23:53:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.694 23:53:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:12:06.694 23:53:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.694 23:53:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.694 23:53:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.694 23:53:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.694 --rc genhtml_branch_coverage=1 00:12:06.694 --rc genhtml_function_coverage=1 00:12:06.694 --rc genhtml_legend=1 00:12:06.694 --rc geninfo_all_blocks=1 00:12:06.694 --rc geninfo_unexecuted_blocks=1 00:12:06.694 00:12:06.694 ' 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.694 --rc genhtml_branch_coverage=1 00:12:06.694 --rc genhtml_function_coverage=1 00:12:06.694 --rc genhtml_legend=1 00:12:06.694 --rc geninfo_all_blocks=1 00:12:06.694 --rc geninfo_unexecuted_blocks=1 00:12:06.694 00:12:06.694 ' 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.694 --rc genhtml_branch_coverage=1 00:12:06.694 --rc genhtml_function_coverage=1 00:12:06.694 --rc genhtml_legend=1 00:12:06.694 --rc geninfo_all_blocks=1 00:12:06.694 --rc geninfo_unexecuted_blocks=1 00:12:06.694 00:12:06.694 ' 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.694 --rc genhtml_branch_coverage=1 00:12:06.694 --rc genhtml_function_coverage=1 00:12:06.694 --rc genhtml_legend=1 00:12:06.694 --rc geninfo_all_blocks=1 00:12:06.694 --rc geninfo_unexecuted_blocks=1 00:12:06.694 00:12:06.694 ' 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/common.sh 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/clear_config.py 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 
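With IP_ADDRESS=127.0.0.1 and PORT=9998 set above, the entries that follow wire the TCP test up: spdk_tgt listens on its usual UNIX socket, socat bridges the TCP port to it, and rpc.py is pointed at the TCP side. A sketch of that plumbing, reusing the invocations visible in the trace:

    # Bridge TCP port 9998 to the target's UNIX RPC socket and query it.
    IP_ADDRESS=127.0.0.1
    PORT=9998
    RPC_SOCK=/var/tmp/spdk.sock

    socat TCP-LISTEN:"$PORT" UNIX-CONNECT:"$RPC_SOCK" &
    socat_pid=$!

    # -r 100: retry the connection up to 100 times; -t 2: RPC timeout in seconds.
    /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods

    kill "$socat_pid" 2>/dev/null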
00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=468583 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:06.694 23:53:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 468583 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 468583 ']' 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.694 23:53:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.694 [2024-12-09 23:53:45.080637] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:06.694 [2024-12-09 23:53:45.080759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468583 ] 00:12:06.694 [2024-12-09 23:53:45.196449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.952 [2024-12-09 23:53:45.306911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.952 [2024-12-09 23:53:45.306916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.210 [2024-12-09 23:53:45.538031] 'OCF_Core' volume operations registered 00:12:07.210 [2024-12-09 23:53:45.538099] 'OCF_Cache' volume operations registered 00:12:07.210 [2024-12-09 23:53:45.542544] 'OCF Composite' volume operations registered 00:12:07.210 [2024-12-09 23:53:45.547110] 'SPDK_block_device' volume operations registered 00:12:07.210 23:53:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.210 23:53:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:12:07.210 23:53:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=468715 00:12:07.210 23:53:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:07.210 23:53:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:07.470 [ 00:12:07.470 "bdev_malloc_delete", 00:12:07.470 "bdev_malloc_create", 00:12:07.470 "bdev_null_resize", 00:12:07.470 "bdev_null_delete", 00:12:07.470 "bdev_null_create", 00:12:07.470 "bdev_nvme_cuse_unregister", 00:12:07.470 "bdev_nvme_cuse_register", 00:12:07.470 "bdev_opal_new_user", 00:12:07.470 "bdev_opal_set_lock_state", 00:12:07.470 "bdev_opal_delete", 00:12:07.470 "bdev_opal_get_info", 00:12:07.470 "bdev_opal_create", 00:12:07.470 "bdev_nvme_opal_revert", 00:12:07.470 "bdev_nvme_opal_init", 00:12:07.470 "bdev_nvme_send_cmd", 00:12:07.470 "bdev_nvme_set_keys", 00:12:07.470 "bdev_nvme_get_path_iostat", 00:12:07.470 "bdev_nvme_get_mdns_discovery_info", 00:12:07.470 "bdev_nvme_stop_mdns_discovery", 00:12:07.470 
"bdev_nvme_start_mdns_discovery", 00:12:07.470 "bdev_nvme_set_multipath_policy", 00:12:07.470 "bdev_nvme_set_preferred_path", 00:12:07.470 "bdev_nvme_get_io_paths", 00:12:07.470 "bdev_nvme_remove_error_injection", 00:12:07.470 "bdev_nvme_add_error_injection", 00:12:07.470 "bdev_nvme_get_discovery_info", 00:12:07.470 "bdev_nvme_stop_discovery", 00:12:07.470 "bdev_nvme_start_discovery", 00:12:07.470 "bdev_nvme_get_controller_health_info", 00:12:07.470 "bdev_nvme_disable_controller", 00:12:07.470 "bdev_nvme_enable_controller", 00:12:07.470 "bdev_nvme_reset_controller", 00:12:07.470 "bdev_nvme_get_transport_statistics", 00:12:07.470 "bdev_nvme_apply_firmware", 00:12:07.470 "bdev_nvme_detach_controller", 00:12:07.470 "bdev_nvme_get_controllers", 00:12:07.470 "bdev_nvme_attach_controller", 00:12:07.470 "bdev_nvme_set_hotplug", 00:12:07.470 "bdev_nvme_set_options", 00:12:07.470 "bdev_passthru_delete", 00:12:07.470 "bdev_passthru_create", 00:12:07.470 "bdev_lvol_set_parent_bdev", 00:12:07.470 "bdev_lvol_set_parent", 00:12:07.470 "bdev_lvol_check_shallow_copy", 00:12:07.470 "bdev_lvol_start_shallow_copy", 00:12:07.470 "bdev_lvol_grow_lvstore", 00:12:07.470 "bdev_lvol_get_lvols", 00:12:07.470 "bdev_lvol_get_lvstores", 00:12:07.470 "bdev_lvol_delete", 00:12:07.470 "bdev_lvol_set_read_only", 00:12:07.470 "bdev_lvol_resize", 00:12:07.470 "bdev_lvol_decouple_parent", 00:12:07.470 "bdev_lvol_inflate", 00:12:07.470 "bdev_lvol_rename", 00:12:07.470 "bdev_lvol_clone_bdev", 00:12:07.470 "bdev_lvol_clone", 00:12:07.470 "bdev_lvol_snapshot", 00:12:07.470 "bdev_lvol_create", 00:12:07.470 "bdev_lvol_delete_lvstore", 00:12:07.470 "bdev_lvol_rename_lvstore", 00:12:07.470 "bdev_lvol_create_lvstore", 00:12:07.470 "bdev_raid_set_options", 00:12:07.470 "bdev_raid_remove_base_bdev", 00:12:07.470 "bdev_raid_add_base_bdev", 00:12:07.470 "bdev_raid_delete", 00:12:07.470 "bdev_raid_create", 00:12:07.470 "bdev_raid_get_bdevs", 00:12:07.470 "bdev_error_inject_error", 00:12:07.470 "bdev_error_delete", 00:12:07.470 "bdev_error_create", 00:12:07.470 "bdev_split_delete", 00:12:07.470 "bdev_split_create", 00:12:07.470 "bdev_delay_delete", 00:12:07.470 "bdev_delay_create", 00:12:07.470 "bdev_delay_update_latency", 00:12:07.470 "bdev_zone_block_delete", 00:12:07.470 "bdev_zone_block_create", 00:12:07.470 "blobfs_create", 00:12:07.471 "blobfs_detect", 00:12:07.471 "blobfs_set_cache_size", 00:12:07.471 "bdev_ocf_flush_status", 00:12:07.471 "bdev_ocf_flush_start", 00:12:07.471 "bdev_ocf_set_seqcutoff", 00:12:07.471 "bdev_ocf_set_cache_mode", 00:12:07.471 "bdev_ocf_get_bdevs", 00:12:07.471 "bdev_ocf_reset_stats", 00:12:07.471 "bdev_ocf_get_stats", 00:12:07.471 "bdev_ocf_delete", 00:12:07.471 "bdev_ocf_create", 00:12:07.471 "bdev_aio_delete", 00:12:07.471 "bdev_aio_rescan", 00:12:07.471 "bdev_aio_create", 00:12:07.471 "bdev_ftl_set_property", 00:12:07.471 "bdev_ftl_get_properties", 00:12:07.471 "bdev_ftl_get_stats", 00:12:07.471 "bdev_ftl_unmap", 00:12:07.471 "bdev_ftl_unload", 00:12:07.471 "bdev_ftl_delete", 00:12:07.471 "bdev_ftl_load", 00:12:07.471 "bdev_ftl_create", 00:12:07.471 "bdev_virtio_attach_controller", 00:12:07.471 "bdev_virtio_scsi_get_devices", 00:12:07.471 "bdev_virtio_detach_controller", 00:12:07.471 "bdev_virtio_blk_set_hotplug", 00:12:07.471 "bdev_iscsi_delete", 00:12:07.471 "bdev_iscsi_create", 00:12:07.471 "bdev_iscsi_set_options", 00:12:07.471 "accel_error_inject_error", 00:12:07.471 "ioat_scan_accel_module", 00:12:07.471 "dsa_scan_accel_module", 00:12:07.471 "iaa_scan_accel_module", 00:12:07.471 
"keyring_file_remove_key", 00:12:07.471 "keyring_file_add_key", 00:12:07.471 "keyring_linux_set_options", 00:12:07.471 "fsdev_aio_delete", 00:12:07.471 "fsdev_aio_create", 00:12:07.471 "iscsi_get_histogram", 00:12:07.471 "iscsi_enable_histogram", 00:12:07.471 "iscsi_set_options", 00:12:07.471 "iscsi_get_auth_groups", 00:12:07.471 "iscsi_auth_group_remove_secret", 00:12:07.471 "iscsi_auth_group_add_secret", 00:12:07.471 "iscsi_delete_auth_group", 00:12:07.471 "iscsi_create_auth_group", 00:12:07.471 "iscsi_set_discovery_auth", 00:12:07.471 "iscsi_get_options", 00:12:07.471 "iscsi_target_node_request_logout", 00:12:07.471 "iscsi_target_node_set_redirect", 00:12:07.471 "iscsi_target_node_set_auth", 00:12:07.471 "iscsi_target_node_add_lun", 00:12:07.471 "iscsi_get_stats", 00:12:07.471 "iscsi_get_connections", 00:12:07.471 "iscsi_portal_group_set_auth", 00:12:07.471 "iscsi_start_portal_group", 00:12:07.471 "iscsi_delete_portal_group", 00:12:07.471 "iscsi_create_portal_group", 00:12:07.471 "iscsi_get_portal_groups", 00:12:07.471 "iscsi_delete_target_node", 00:12:07.471 "iscsi_target_node_remove_pg_ig_maps", 00:12:07.471 "iscsi_target_node_add_pg_ig_maps", 00:12:07.471 "iscsi_create_target_node", 00:12:07.471 "iscsi_get_target_nodes", 00:12:07.471 "iscsi_delete_initiator_group", 00:12:07.471 "iscsi_initiator_group_remove_initiators", 00:12:07.471 "iscsi_initiator_group_add_initiators", 00:12:07.471 "iscsi_create_initiator_group", 00:12:07.471 "iscsi_get_initiator_groups", 00:12:07.471 "nvmf_set_crdt", 00:12:07.471 "nvmf_set_config", 00:12:07.471 "nvmf_set_max_subsystems", 00:12:07.471 "nvmf_stop_mdns_prr", 00:12:07.471 "nvmf_publish_mdns_prr", 00:12:07.471 "nvmf_subsystem_get_listeners", 00:12:07.471 "nvmf_subsystem_get_qpairs", 00:12:07.471 "nvmf_subsystem_get_controllers", 00:12:07.471 "nvmf_get_stats", 00:12:07.471 "nvmf_get_transports", 00:12:07.471 "nvmf_create_transport", 00:12:07.471 "nvmf_get_targets", 00:12:07.471 "nvmf_delete_target", 00:12:07.471 "nvmf_create_target", 00:12:07.471 "nvmf_subsystem_allow_any_host", 00:12:07.471 "nvmf_subsystem_set_keys", 00:12:07.471 "nvmf_subsystem_remove_host", 00:12:07.471 "nvmf_subsystem_add_host", 00:12:07.471 "nvmf_ns_remove_host", 00:12:07.471 "nvmf_ns_add_host", 00:12:07.471 "nvmf_subsystem_remove_ns", 00:12:07.471 "nvmf_subsystem_set_ns_ana_group", 00:12:07.471 "nvmf_subsystem_add_ns", 00:12:07.471 "nvmf_subsystem_listener_set_ana_state", 00:12:07.471 "nvmf_discovery_get_referrals", 00:12:07.471 "nvmf_discovery_remove_referral", 00:12:07.471 "nvmf_discovery_add_referral", 00:12:07.471 "nvmf_subsystem_remove_listener", 00:12:07.471 "nvmf_subsystem_add_listener", 00:12:07.471 "nvmf_delete_subsystem", 00:12:07.471 "nvmf_create_subsystem", 00:12:07.471 "nvmf_get_subsystems", 00:12:07.471 "env_dpdk_get_mem_stats", 00:12:07.471 "nbd_get_disks", 00:12:07.471 "nbd_stop_disk", 00:12:07.471 "nbd_start_disk", 00:12:07.471 "ublk_recover_disk", 00:12:07.471 "ublk_get_disks", 00:12:07.471 "ublk_stop_disk", 00:12:07.471 "ublk_start_disk", 00:12:07.471 "ublk_destroy_target", 00:12:07.471 "ublk_create_target", 00:12:07.471 "virtio_blk_create_transport", 00:12:07.471 "virtio_blk_get_transports", 00:12:07.471 "vhost_controller_set_coalescing", 00:12:07.471 "vhost_get_controllers", 00:12:07.471 "vhost_delete_controller", 00:12:07.471 "vhost_create_blk_controller", 00:12:07.471 "vhost_scsi_controller_remove_target", 00:12:07.471 "vhost_scsi_controller_add_target", 00:12:07.471 "vhost_start_scsi_controller", 00:12:07.471 "vhost_create_scsi_controller", 00:12:07.471 
"thread_set_cpumask", 00:12:07.471 "scheduler_set_options", 00:12:07.471 "framework_get_governor", 00:12:07.471 "framework_get_scheduler", 00:12:07.471 "framework_set_scheduler", 00:12:07.471 "framework_get_reactors", 00:12:07.471 "thread_get_io_channels", 00:12:07.471 "thread_get_pollers", 00:12:07.471 "thread_get_stats", 00:12:07.471 "framework_monitor_context_switch", 00:12:07.471 "spdk_kill_instance", 00:12:07.471 "log_enable_timestamps", 00:12:07.471 "log_get_flags", 00:12:07.471 "log_clear_flag", 00:12:07.471 "log_set_flag", 00:12:07.471 "log_get_level", 00:12:07.471 "log_set_level", 00:12:07.471 "log_get_print_level", 00:12:07.471 "log_set_print_level", 00:12:07.471 "framework_enable_cpumask_locks", 00:12:07.471 "framework_disable_cpumask_locks", 00:12:07.471 "framework_wait_init", 00:12:07.471 "framework_start_init", 00:12:07.471 "scsi_get_devices", 00:12:07.471 "bdev_get_histogram", 00:12:07.471 "bdev_enable_histogram", 00:12:07.471 "bdev_set_qos_limit", 00:12:07.471 "bdev_set_qd_sampling_period", 00:12:07.471 "bdev_get_bdevs", 00:12:07.471 "bdev_reset_iostat", 00:12:07.471 "bdev_get_iostat", 00:12:07.471 "bdev_examine", 00:12:07.471 "bdev_wait_for_examine", 00:12:07.471 "bdev_set_options", 00:12:07.471 "accel_get_stats", 00:12:07.471 "accel_set_options", 00:12:07.471 "accel_set_driver", 00:12:07.471 "accel_crypto_key_destroy", 00:12:07.471 "accel_crypto_keys_get", 00:12:07.471 "accel_crypto_key_create", 00:12:07.472 "accel_assign_opc", 00:12:07.472 "accel_get_module_info", 00:12:07.472 "accel_get_opc_assignments", 00:12:07.472 "vmd_rescan", 00:12:07.472 "vmd_remove_device", 00:12:07.472 "vmd_enable", 00:12:07.472 "sock_get_default_impl", 00:12:07.472 "sock_set_default_impl", 00:12:07.472 "sock_impl_set_options", 00:12:07.472 "sock_impl_get_options", 00:12:07.472 "iobuf_get_stats", 00:12:07.472 "iobuf_set_options", 00:12:07.472 "keyring_get_keys", 00:12:07.472 "framework_get_pci_devices", 00:12:07.472 "framework_get_config", 00:12:07.472 "framework_get_subsystems", 00:12:07.472 "fsdev_set_opts", 00:12:07.472 "fsdev_get_opts", 00:12:07.472 "trace_get_info", 00:12:07.472 "trace_get_tpoint_group_mask", 00:12:07.472 "trace_disable_tpoint_group", 00:12:07.472 "trace_enable_tpoint_group", 00:12:07.472 "trace_clear_tpoint_mask", 00:12:07.472 "trace_set_tpoint_mask", 00:12:07.472 "notify_get_notifications", 00:12:07.472 "notify_get_types", 00:12:07.472 "spdk_get_version", 00:12:07.472 "rpc_get_methods" 00:12:07.472 ] 00:12:07.730 23:53:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:07.730 23:53:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.730 23:53:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.730 23:53:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:07.730 23:53:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 468583 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 468583 ']' 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 468583 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468583 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.730 23:53:46 
spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468583' 00:12:07.730 killing process with pid 468583 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 468583 00:12:07.730 23:53:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 468583 00:12:08.296 00:12:08.296 real 0m2.008s 00:12:08.296 user 0m3.504s 00:12:08.296 sys 0m0.719s 00:12:08.296 23:53:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.296 23:53:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.296 ************************************ 00:12:08.296 END TEST spdkcli_tcp 00:12:08.296 ************************************ 00:12:08.296 23:53:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:08.296 23:53:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.296 23:53:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.296 23:53:46 -- common/autotest_common.sh@10 -- # set +x 00:12:08.296 ************************************ 00:12:08.296 START TEST dpdk_mem_utility 00:12:08.296 ************************************ 00:12:08.296 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:08.296 * Looking for test storage... 00:12:08.296 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility 00:12:08.296 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.296 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.296 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.554 23:53:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.554 --rc genhtml_branch_coverage=1 00:12:08.554 --rc genhtml_function_coverage=1 00:12:08.554 --rc genhtml_legend=1 00:12:08.554 --rc geninfo_all_blocks=1 00:12:08.554 --rc geninfo_unexecuted_blocks=1 00:12:08.554 00:12:08.554 ' 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.554 --rc genhtml_branch_coverage=1 00:12:08.554 --rc genhtml_function_coverage=1 00:12:08.554 --rc genhtml_legend=1 00:12:08.554 --rc geninfo_all_blocks=1 00:12:08.554 --rc geninfo_unexecuted_blocks=1 00:12:08.554 00:12:08.554 ' 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.554 --rc genhtml_branch_coverage=1 00:12:08.554 --rc genhtml_function_coverage=1 00:12:08.554 --rc genhtml_legend=1 00:12:08.554 --rc geninfo_all_blocks=1 00:12:08.554 --rc geninfo_unexecuted_blocks=1 00:12:08.554 00:12:08.554 ' 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.554 --rc genhtml_branch_coverage=1 00:12:08.554 --rc genhtml_function_coverage=1 00:12:08.554 --rc genhtml_legend=1 00:12:08.554 --rc geninfo_all_blocks=1 00:12:08.554 --rc geninfo_unexecuted_blocks=1 00:12:08.554 00:12:08.554 ' 00:12:08.554 23:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:12:08.554 23:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=468920 00:12:08.554 23:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:12:08.554 23:53:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 468920 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 468920 ']' 00:12:08.554 23:53:46 dpdk_mem_utility -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.554 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.555 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.555 23:53:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:08.555 [2024-12-09 23:53:46.972508] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:08.555 [2024-12-09 23:53:46.972610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468920 ] 00:12:08.555 [2024-12-09 23:53:47.069640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.812 [2024-12-09 23:53:47.160915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.071 [2024-12-09 23:53:47.522886] 'OCF_Core' volume operations registered 00:12:09.071 [2024-12-09 23:53:47.522935] 'OCF_Cache' volume operations registered 00:12:09.071 [2024-12-09 23:53:47.529905] 'OCF Composite' volume operations registered 00:12:09.071 [2024-12-09 23:53:47.536915] 'SPDK_block_device' volume operations registered 00:12:09.329 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.329 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:12:09.329 23:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:09.329 23:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:09.329 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.329 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:09.329 { 00:12:09.329 "filename": "/tmp/spdk_mem_dump.txt" 00:12:09.329 } 00:12:09.329 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.329 23:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:12:09.588 DPDK memory size 1200.000000 MiB in 1 heap(s) 00:12:09.588 1 heaps totaling size 1200.000000 MiB 00:12:09.588 size: 1200.000000 MiB heap id: 0 00:12:09.588 end heaps---------- 00:12:09.588 26 mempools totaling size 958.039612 MiB 00:12:09.588 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:09.588 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:09.588 size: 100.555481 MiB name: bdev_io_468920 00:12:09.588 size: 76.286926 MiB name: ocf_env_12:ocf_mio_8 00:12:09.588 size: 58.218811 MiB name: ocf_env_8:ocf_req_128 00:12:09.588 size: 50.003479 MiB name: msgpool_468920 00:12:09.588 size: 40.142639 MiB name: ocf_env_11:ocf_mio_4 00:12:09.588 size: 36.509338 MiB name: fsdev_io_468920 00:12:09.588 size: 34.164612 MiB name: ocf_env_7:ocf_req_64 00:12:09.588 size: 22.138245 MiB name: ocf_env_6:ocf_req_32 00:12:09.588 size: 22.138245 MiB name: ocf_env_10:ocf_mio_2 00:12:09.588 size: 21.763794 MiB name: PDU_Pool 00:12:09.588 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:09.588 size: 16.136780 
MiB name: ocf_env_5:ocf_req_16 00:12:09.588 size: 14.136292 MiB name: ocf_env_4:ocf_req_8 00:12:09.588 size: 14.136292 MiB name: ocf_env_9:ocf_mio_1 00:12:09.589 size: 12.136414 MiB name: ocf_env_3:ocf_req_4 00:12:09.589 size: 10.135315 MiB name: ocf_env_1:ocf_req_1 00:12:09.589 size: 10.135315 MiB name: ocf_env_2:ocf_req_2 00:12:09.589 size: 10.135315 MiB name: ocf_env_16:OCF Composit 00:12:09.589 size: 10.135315 MiB name: ocf_env_17:SPDK_block_d 00:12:09.589 size: 4.133484 MiB name: evtpool_468920 00:12:09.589 size: 1.609375 MiB name: ocf_env_15:ocf_mio_64 00:12:09.589 size: 1.310547 MiB name: ocf_env_14:ocf_mio_32 00:12:09.589 size: 1.161133 MiB name: ocf_env_13:ocf_mio_16 00:12:09.589 size: 0.026123 MiB name: Session_Pool 00:12:09.589 end mempools------- 00:12:09.589 6 memzones totaling size 4.142822 MiB 00:12:09.589 size: 1.000366 MiB name: RG_ring_0_468920 00:12:09.589 size: 1.000366 MiB name: RG_ring_1_468920 00:12:09.589 size: 1.000366 MiB name: RG_ring_4_468920 00:12:09.589 size: 1.000366 MiB name: RG_ring_5_468920 00:12:09.589 size: 0.125366 MiB name: RG_ring_2_468920 00:12:09.589 size: 0.015991 MiB name: RG_ring_3_468920 00:12:09.589 end memzones------- 00:12:09.589 23:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:12:09.589 heap id: 0 total size: 1200.000000 MiB number of busy elements: 119 number of free elements: 46 00:12:09.589 list of free elements. size: 38.574463 MiB 00:12:09.589 element at address: 0x200030a00000 with size: 0.999878 MiB 00:12:09.589 element at address: 0x200030e00000 with size: 0.999329 MiB 00:12:09.589 element at address: 0x200019200000 with size: 0.998718 MiB 00:12:09.589 element at address: 0x200000400000 with size: 0.998535 MiB 00:12:09.589 element at address: 0x200030000000 with size: 0.997742 MiB 00:12:09.589 element at address: 0x200019400000 with size: 0.997375 MiB 00:12:09.589 element at address: 0x200019e00000 with size: 0.997375 MiB 00:12:09.589 element at address: 0x20002f400000 with size: 0.997192 MiB 00:12:09.589 element at address: 0x20001b400000 with size: 0.996399 MiB 00:12:09.589 element at address: 0x200024e00000 with size: 0.996399 MiB 00:12:09.589 element at address: 0x20001a800000 with size: 0.996277 MiB 00:12:09.589 element at address: 0x20001c400000 with size: 0.995911 MiB 00:12:09.589 element at address: 0x20001d600000 with size: 0.994446 MiB 00:12:09.589 element at address: 0x200025e00000 with size: 0.994446 MiB 00:12:09.589 element at address: 0x200049e00000 with size: 0.994446 MiB 00:12:09.589 element at address: 0x200027600000 with size: 0.990051 MiB 00:12:09.589 element at address: 0x20001ee00000 with size: 0.968079 MiB 00:12:09.589 element at address: 0x20003fc00000 with size: 0.959961 MiB 00:12:09.589 element at address: 0x200030c00000 with size: 0.936584 MiB 00:12:09.589 element at address: 0x200021200000 with size: 0.913635 MiB 00:12:09.589 element at address: 0x20001c200000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x20001d400000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x20001ec00000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x200021000000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x200024c00000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x200025c00000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x200027400000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x200029e00000 with size: 0.866211 MiB 00:12:09.589 element 
at address: 0x20002f200000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x20002fe00000 with size: 0.866211 MiB 00:12:09.589 element at address: 0x200006400000 with size: 0.866089 MiB 00:12:09.589 element at address: 0x20000a600000 with size: 0.866089 MiB 00:12:09.589 element at address: 0x200003e00000 with size: 0.857300 MiB 00:12:09.589 element at address: 0x20002a000000 with size: 0.845764 MiB 00:12:09.589 element at address: 0x20002ec00000 with size: 0.837769 MiB 00:12:09.589 element at address: 0x200012c00000 with size: 0.811157 MiB 00:12:09.589 element at address: 0x200000200000 with size: 0.717346 MiB 00:12:09.589 element at address: 0x20002ee00000 with size: 0.688354 MiB 00:12:09.589 element at address: 0x200032800000 with size: 0.582886 MiB 00:12:09.589 element at address: 0x200000c00000 with size: 0.495422 MiB 00:12:09.589 element at address: 0x200031000000 with size: 0.490845 MiB 00:12:09.589 element at address: 0x200049c00000 with size: 0.490845 MiB 00:12:09.589 element at address: 0x200031200000 with size: 0.485657 MiB 00:12:09.589 element at address: 0x20003fe00000 with size: 0.410034 MiB 00:12:09.589 element at address: 0x20002f000000 with size: 0.388977 MiB 00:12:09.589 element at address: 0x200000800000 with size: 0.355042 MiB 00:12:09.589 list of standard malloc elements. size: 199.232849 MiB 00:12:09.589 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:12:09.589 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:12:09.589 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:12:09.589 element at address: 0x200030afff80 with size: 1.000122 MiB 00:12:09.589 element at address: 0x200030cfff80 with size: 1.000122 MiB 00:12:09.589 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:12:09.589 element at address: 0x200030ceff00 with size: 0.062622 MiB 00:12:09.589 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:12:09.589 element at address: 0x2000192ffd40 with size: 0.000549 MiB 00:12:09.589 element at address: 0x200030cefdc0 with size: 0.000305 MiB 00:12:09.589 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:12:09.589 element at address: 0x2000212e9fc0 with size: 0.000244 MiB 00:12:09.589 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20000085b040 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20000085f300 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20000087f680 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200000cff000 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200003efb980 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:12:09.589 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200012cefc80 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000192ffac0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000192ffb80 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000194ff540 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000194ff600 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000194ff6c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200019eff540 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200019eff600 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200019eff6c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001a8ff0c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001a8ff180 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001a8ff240 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001b4ff140 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001b4ff200 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001b4ff2c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001c2fde00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001c4fef40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001c4ff000 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001c4ff0c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001d4fde00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001d6fe940 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001d6fea00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001d6feac0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001ecfde00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001eef7d40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001eef7e00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x20001eef7ec0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000210fde00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000212e9e40 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000212e9f00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000212ea0c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200024cfde00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200024eff140 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200024eff200 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200024eff2c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200025cfde00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200025efe940 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200025efea00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200025efeac0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000274fde00 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000276fd740 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000276fd800 with size: 0.000183 MiB 00:12:09.589 element at address: 0x2000276fd8c0 with size: 0.000183 MiB 00:12:09.589 element at address: 0x200029efde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002a0d8840 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002a0d8900 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002a0d89c0 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002ecd6780 with size: 0.000183 MiB 
00:12:09.590 element at address: 0x20002ecd6840 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002ecd6900 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002ecfde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002eeb0380 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002eeb0440 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002eeb0500 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002eefde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f063940 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f063a00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f063ac0 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f063b80 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f063c40 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f063d00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f0fde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f2fde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f4ff480 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f4ff540 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f4ff600 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002f4ff6c0 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20002fefde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x2000300ff6c0 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200030cefc40 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200030cefd00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200030effd40 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003107da80 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003107db40 with size: 0.000183 MiB 00:12:09.590 element at address: 0x2000310fde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x2000312bc740 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200032895380 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200032895440 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003fcfde00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003fe68f80 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003fe69040 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003fe6fc40 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003fe6fe40 with size: 0.000183 MiB 00:12:09.590 element at address: 0x20003fe6ff00 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200049c7da80 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200049c7db40 with size: 0.000183 MiB 00:12:09.590 element at address: 0x200049cfde00 with size: 0.000183 MiB 00:12:09.590 list of memzone associated elements. 
size: 962.192688 MiB 00:12:09.590 element at address: 0x200032895500 with size: 211.416748 MiB 00:12:09.590 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:09.590 element at address: 0x20003fe6ffc0 with size: 157.562561 MiB 00:12:09.590 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:09.590 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:12:09.590 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_468920_0 00:12:09.590 element at address: 0x20002a0d8a80 with size: 75.153687 MiB 00:12:09.590 associated memzone info: size: 75.153564 MiB name: MP_ocf_env_12:ocf_mio_8_0 00:12:09.590 element at address: 0x2000212ea180 with size: 57.085571 MiB 00:12:09.590 associated memzone info: size: 57.085449 MiB name: MP_ocf_env_8:ocf_req_128_0 00:12:09.590 element at address: 0x200000dff380 with size: 48.003052 MiB 00:12:09.590 associated memzone info: size: 48.002930 MiB name: MP_msgpool_468920_0 00:12:09.590 element at address: 0x2000276fd980 with size: 39.009399 MiB 00:12:09.590 associated memzone info: size: 39.009277 MiB name: MP_ocf_env_11:ocf_mio_4_0 00:12:09.590 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:12:09.590 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_468920_0 00:12:09.590 element at address: 0x20001eef7f80 with size: 33.031372 MiB 00:12:09.590 associated memzone info: size: 33.031250 MiB name: MP_ocf_env_7:ocf_req_64_0 00:12:09.590 element at address: 0x20001d6feb80 with size: 21.005005 MiB 00:12:09.590 associated memzone info: size: 21.004883 MiB name: MP_ocf_env_6:ocf_req_32_0 00:12:09.590 element at address: 0x200025efeb80 with size: 21.005005 MiB 00:12:09.590 associated memzone info: size: 21.004883 MiB name: MP_ocf_env_10:ocf_mio_2_0 00:12:09.590 element at address: 0x2000313be940 with size: 20.255554 MiB 00:12:09.590 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:09.590 element at address: 0x200049ffeb40 with size: 18.005066 MiB 00:12:09.590 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:09.590 element at address: 0x20001c4ff180 with size: 15.003540 MiB 00:12:09.590 associated memzone info: size: 15.003418 MiB name: MP_ocf_env_5:ocf_req_16_0 00:12:09.590 element at address: 0x20001b4ff380 with size: 13.003052 MiB 00:12:09.590 associated memzone info: size: 13.002930 MiB name: MP_ocf_env_4:ocf_req_8_0 00:12:09.590 element at address: 0x200024eff380 with size: 13.003052 MiB 00:12:09.590 associated memzone info: size: 13.002930 MiB name: MP_ocf_env_9:ocf_mio_1_0 00:12:09.590 element at address: 0x20001a8ff300 with size: 11.003174 MiB 00:12:09.590 associated memzone info: size: 11.003052 MiB name: MP_ocf_env_3:ocf_req_4_0 00:12:09.590 element at address: 0x2000194ff780 with size: 9.002075 MiB 00:12:09.590 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_1:ocf_req_1_0 00:12:09.590 element at address: 0x200019eff780 with size: 9.002075 MiB 00:12:09.590 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_2:ocf_req_2_0 00:12:09.590 element at address: 0x20002f4ff780 with size: 9.002075 MiB 00:12:09.590 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_16:OCF Composit_0 00:12:09.590 element at address: 0x2000300ff780 with size: 9.002075 MiB 00:12:09.590 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_17:SPDK_block_d_0 00:12:09.590 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:12:09.590 associated memzone info: size: 3.000122 MiB name: 
MP_evtpool_468920_0 00:12:09.590 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:12:09.590 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_468920 00:12:09.590 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_evtpool_468920 00:12:09.590 element at address: 0x200012cefd40 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_1:ocf_req_1 00:12:09.590 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_2:ocf_req_2 00:12:09.590 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_3:ocf_req_4 00:12:09.590 element at address: 0x200003efba40 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_4:ocf_req_8 00:12:09.590 element at address: 0x20001c2fdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_5:ocf_req_16 00:12:09.590 element at address: 0x20001d4fdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_6:ocf_req_32 00:12:09.590 element at address: 0x20001ecfdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_7:ocf_req_64 00:12:09.590 element at address: 0x2000210fdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_8:ocf_req_128 00:12:09.590 element at address: 0x200024cfdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_9:ocf_mio_1 00:12:09.590 element at address: 0x200025cfdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_10:ocf_mio_2 00:12:09.590 element at address: 0x2000274fdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_11:ocf_mio_4 00:12:09.590 element at address: 0x200029efdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_12:ocf_mio_8 00:12:09.590 element at address: 0x20002ecfdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_13:ocf_mio_16 00:12:09.590 element at address: 0x20002eefdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_14:ocf_mio_32 00:12:09.590 element at address: 0x20002f0fdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_15:ocf_mio_64 00:12:09.590 element at address: 0x20002f2fdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_16:OCF Composit 00:12:09.590 element at address: 0x20002fefdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_17:SPDK_block_d 00:12:09.590 element at address: 0x2000310fdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:09.590 element at address: 0x2000312bc800 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:09.590 element at address: 0x20003fcfdec0 with size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:09.590 element at address: 0x200049cfdec0 with 
size: 1.008118 MiB 00:12:09.590 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:09.590 element at address: 0x200000cff180 with size: 1.000488 MiB 00:12:09.590 associated memzone info: size: 1.000366 MiB name: RG_ring_0_468920 00:12:09.590 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:12:09.590 associated memzone info: size: 1.000366 MiB name: RG_ring_1_468920 00:12:09.590 element at address: 0x200030effe00 with size: 1.000488 MiB 00:12:09.590 associated memzone info: size: 1.000366 MiB name: RG_ring_4_468920 00:12:09.590 element at address: 0x200049efe940 with size: 1.000488 MiB 00:12:09.590 associated memzone info: size: 1.000366 MiB name: RG_ring_5_468920 00:12:09.590 element at address: 0x20002f063dc0 with size: 0.600891 MiB 00:12:09.590 associated memzone info: size: 0.600769 MiB name: MP_ocf_env_15:ocf_mio_64_0 00:12:09.590 element at address: 0x20000087f740 with size: 0.500488 MiB 00:12:09.590 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_468920 00:12:09.590 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:12:09.590 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_468920 00:12:09.591 element at address: 0x20003107dc00 with size: 0.500488 MiB 00:12:09.591 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:09.591 element at address: 0x200049c7dc00 with size: 0.500488 MiB 00:12:09.591 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:09.591 element at address: 0x20002eeb05c0 with size: 0.302063 MiB 00:12:09.591 associated memzone info: size: 0.301941 MiB name: MP_ocf_env_14:ocf_mio_32_0 00:12:09.591 element at address: 0x20003127c540 with size: 0.250488 MiB 00:12:09.591 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:12:09.591 element at address: 0x20002ecd69c0 with size: 0.152649 MiB 00:12:09.591 associated memzone info: size: 0.152527 MiB name: MP_ocf_env_13:ocf_mio_16_0 00:12:09.591 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_468920 00:12:09.591 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_ring_2_468920 00:12:09.591 element at address: 0x200012ccfa80 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_1:ocf_req_1 00:12:09.591 element at address: 0x20000a6ddb80 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_2:ocf_req_2 00:12:09.591 element at address: 0x2000064ddb80 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_3:ocf_req_4 00:12:09.591 element at address: 0x200003edb780 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_4:ocf_req_8 00:12:09.591 element at address: 0x20001c2ddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_5:ocf_req_16 00:12:09.591 element at address: 0x20001d4ddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_6:ocf_req_32 00:12:09.591 element at address: 0x20001ecddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_7:ocf_req_64 00:12:09.591 element at address: 0x2000210ddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: 
RG_MP_ocf_env_8:ocf_req_128 00:12:09.591 element at address: 0x200024cddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_9:ocf_mio_1 00:12:09.591 element at address: 0x200025cddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_10:ocf_mio_2 00:12:09.591 element at address: 0x2000274ddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_11:ocf_mio_4 00:12:09.591 element at address: 0x200029eddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_12:ocf_mio_8 00:12:09.591 element at address: 0x20002f2ddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_16:OCF Composit 00:12:09.591 element at address: 0x20002feddc00 with size: 0.125488 MiB 00:12:09.591 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_17:SPDK_block_d 00:12:09.591 element at address: 0x20003fcf5c00 with size: 0.031738 MiB 00:12:09.591 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:09.591 element at address: 0x20003fe69100 with size: 0.023743 MiB 00:12:09.591 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:09.591 element at address: 0x20000085b100 with size: 0.016113 MiB 00:12:09.591 associated memzone info: size: 0.015991 MiB name: RG_ring_3_468920 00:12:09.591 element at address: 0x20003fe6f240 with size: 0.002441 MiB 00:12:09.591 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:09.591 element at address: 0x20002ecfdb00 with size: 0.000732 MiB 00:12:09.591 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_13:ocf_mio_16 00:12:09.591 element at address: 0x20002eefdb00 with size: 0.000732 MiB 00:12:09.591 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_14:ocf_mio_32 00:12:09.591 element at address: 0x20002f0fdb00 with size: 0.000732 MiB 00:12:09.591 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_15:ocf_mio_64 00:12:09.591 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:12:09.591 associated memzone info: size: 0.000183 MiB name: MP_msgpool_468920 00:12:09.591 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:12:09.591 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_468920 00:12:09.591 element at address: 0x20000085af00 with size: 0.000305 MiB 00:12:09.591 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_468920 00:12:09.591 element at address: 0x20003fe6fd00 with size: 0.000305 MiB 00:12:09.591 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:09.591 23:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:09.591 23:53:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 468920 00:12:09.591 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 468920 ']' 00:12:09.591 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 468920 00:12:09.591 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:12:09.591 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.591 23:53:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 468920 00:12:09.591 23:53:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.591 23:53:48 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.591 23:53:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 468920' 00:12:09.591 killing process with pid 468920 00:12:09.591 23:53:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 468920 00:12:09.591 23:53:48 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 468920 00:12:10.525 00:12:10.525 real 0m1.978s 00:12:10.525 user 0m1.935s 00:12:10.525 sys 0m0.759s 00:12:10.525 23:53:48 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.525 23:53:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:10.525 ************************************ 00:12:10.525 END TEST dpdk_mem_utility 00:12:10.525 ************************************ 00:12:10.525 23:53:48 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh 00:12:10.525 23:53:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.525 23:53:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.525 23:53:48 -- common/autotest_common.sh@10 -- # set +x 00:12:10.525 ************************************ 00:12:10.525 START TEST event 00:12:10.525 ************************************ 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh 00:12:10.525 * Looking for test storage... 00:12:10.525 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1711 -- # lcov --version 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:10.525 23:53:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.525 23:53:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.525 23:53:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.525 23:53:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.525 23:53:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.525 23:53:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.525 23:53:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.525 23:53:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.525 23:53:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.525 23:53:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.525 23:53:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.525 23:53:48 event -- scripts/common.sh@344 -- # case "$op" in 00:12:10.525 23:53:48 event -- scripts/common.sh@345 -- # : 1 00:12:10.525 23:53:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.525 23:53:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.525 23:53:48 event -- scripts/common.sh@365 -- # decimal 1 00:12:10.525 23:53:48 event -- scripts/common.sh@353 -- # local d=1 00:12:10.525 23:53:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.525 23:53:48 event -- scripts/common.sh@355 -- # echo 1 00:12:10.525 23:53:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.525 23:53:48 event -- scripts/common.sh@366 -- # decimal 2 00:12:10.525 23:53:48 event -- scripts/common.sh@353 -- # local d=2 00:12:10.525 23:53:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.525 23:53:48 event -- scripts/common.sh@355 -- # echo 2 00:12:10.525 23:53:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.525 23:53:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.525 23:53:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.525 23:53:48 event -- scripts/common.sh@368 -- # return 0 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.525 --rc genhtml_branch_coverage=1 00:12:10.525 --rc genhtml_function_coverage=1 00:12:10.525 --rc genhtml_legend=1 00:12:10.525 --rc geninfo_all_blocks=1 00:12:10.525 --rc geninfo_unexecuted_blocks=1 00:12:10.525 00:12:10.525 ' 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.525 --rc genhtml_branch_coverage=1 00:12:10.525 --rc genhtml_function_coverage=1 00:12:10.525 --rc genhtml_legend=1 00:12:10.525 --rc geninfo_all_blocks=1 00:12:10.525 --rc geninfo_unexecuted_blocks=1 00:12:10.525 00:12:10.525 ' 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.525 --rc genhtml_branch_coverage=1 00:12:10.525 --rc genhtml_function_coverage=1 00:12:10.525 --rc genhtml_legend=1 00:12:10.525 --rc geninfo_all_blocks=1 00:12:10.525 --rc geninfo_unexecuted_blocks=1 00:12:10.525 00:12:10.525 ' 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:10.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.525 --rc genhtml_branch_coverage=1 00:12:10.525 --rc genhtml_function_coverage=1 00:12:10.525 --rc genhtml_legend=1 00:12:10.525 --rc geninfo_all_blocks=1 00:12:10.525 --rc geninfo_unexecuted_blocks=1 00:12:10.525 00:12:10.525 ' 00:12:10.525 23:53:48 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:12:10.525 23:53:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:12:10.525 23:53:48 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:10.525 23:53:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.525 23:53:48 event -- common/autotest_common.sh@10 -- # set +x 00:12:10.525 ************************************ 00:12:10.525 START TEST event_perf 00:12:10.525 ************************************ 00:12:10.525 23:53:48 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
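The event_perf run launched here takes a reactor core mask and a fixed measurement window. As a minimal sketch, assuming the same workspace checkout this job uses, the tool can be invoked by hand with exactly the path and flags in the logged command:

  # -m 0xF starts reactors on cores 0-3; -t 1 measures for one second.
  # Typically needs root for hugepage access, as under the autotest harness.
  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1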
00:12:10.525 Running I/O for 1 seconds...[2024-12-09 23:53:49.000050] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:10.525 [2024-12-09 23:53:49.000200] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469240 ] 00:12:10.783 [2024-12-09 23:53:49.081618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.783 [2024-12-09 23:53:49.148253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.783 [2024-12-09 23:53:49.148306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.783 [2024-12-09 23:53:49.148373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.783 [2024-12-09 23:53:49.148377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.716 Running I/O for 1 seconds... 00:12:11.717 lcore 0: 233006 00:12:11.717 lcore 1: 233004 00:12:11.717 lcore 2: 233004 00:12:11.717 lcore 3: 233005 00:12:11.975 done. 00:12:11.975 00:12:11.975 real 0m1.261s 00:12:11.976 user 0m4.170s 00:12:11.976 sys 0m0.084s 00:12:11.976 23:53:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.976 23:53:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:11.976 ************************************ 00:12:11.976 END TEST event_perf 00:12:11.976 ************************************ 00:12:11.976 23:53:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:12:11.976 23:53:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:11.976 23:53:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.976 23:53:50 event -- common/autotest_common.sh@10 -- # set +x 00:12:11.976 ************************************ 00:12:11.976 START TEST event_reactor 00:12:11.976 ************************************ 00:12:11.976 23:53:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:12:11.976 [2024-12-09 23:53:50.317039] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
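Read as per-core event counts for the one-second window, the lcore lines above sum to roughly 932k events. A hedged sketch of aggregating them from a captured log; event_perf.log is an assumed capture file, not something this job writes:

  # Sum the per-lcore counters printed by event_perf and report the total.
  # Assumes the output was captured one record per line.
  awk '/^lcore [0-9]+:/ { total += $3 } END { printf "total: %d events\n", total }' event_perf.log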
00:12:11.976 [2024-12-09 23:53:50.317174] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469397 ] 00:12:11.976 [2024-12-09 23:53:50.456261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.234 [2024-12-09 23:53:50.561259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.168 test_start 00:12:13.168 oneshot 00:12:13.168 tick 100 00:12:13.168 tick 100 00:12:13.168 tick 250 00:12:13.168 tick 100 00:12:13.169 tick 100 00:12:13.169 tick 100 00:12:13.169 tick 250 00:12:13.169 tick 500 00:12:13.169 tick 100 00:12:13.169 tick 100 00:12:13.169 tick 250 00:12:13.169 tick 100 00:12:13.169 tick 100 00:12:13.169 test_end 00:12:13.169 00:12:13.169 real 0m1.360s 00:12:13.169 user 0m1.231s 00:12:13.169 sys 0m0.120s 00:12:13.169 23:53:51 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.169 23:53:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:13.169 ************************************ 00:12:13.169 END TEST event_reactor 00:12:13.169 ************************************ 00:12:13.169 23:53:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:13.169 23:53:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.169 23:53:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.169 23:53:51 event -- common/autotest_common.sh@10 -- # set +x 00:12:13.428 ************************************ 00:12:13.428 START TEST event_reactor_perf 00:12:13.428 ************************************ 00:12:13.428 23:53:51 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:13.428 [2024-12-09 23:53:51.724350] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:12:13.428 [2024-12-09 23:53:51.724414] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469554 ] 00:12:13.428 [2024-12-09 23:53:51.840482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.428 [2024-12-09 23:53:51.945033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.806 test_start 00:12:14.806 test_end 00:12:14.806 Performance: 199785 events per second 00:12:14.806 00:12:14.806 real 0m1.337s 00:12:14.806 user 0m1.223s 00:12:14.806 sys 0m0.104s 00:12:14.806 23:53:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.806 23:53:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:14.806 ************************************ 00:12:14.806 END TEST event_reactor_perf 00:12:14.806 ************************************ 00:12:14.806 23:53:53 event -- event/event.sh@49 -- # uname -s 00:12:14.806 23:53:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:14.806 23:53:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:12:14.806 23:53:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.806 23:53:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.806 23:53:53 event -- common/autotest_common.sh@10 -- # set +x 00:12:14.806 ************************************ 00:12:14.806 START TEST event_scheduler 00:12:14.806 ************************************ 00:12:14.806 23:53:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:12:14.806 * Looking for test storage... 
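Every test in this log is driven through the harness's run_test wrapper, which emits the asterisk banners, the START/END TEST markers, and the real/user/sys timings seen above. A simplified, illustrative stand-in (the real wrapper lives in autotest_common.sh and does more, such as xtrace control and timing bookkeeping):

  # Not the actual autotest_common.sh implementation; a sketch of the
  # observable behavior only.
  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # produces the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }
  run_test_sketch event_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh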
00:12:14.806 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler 00:12:14.806 23:53:53 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.806 23:53:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.806 23:53:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.806 23:53:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.806 23:53:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.807 23:53:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.807 --rc genhtml_branch_coverage=1 00:12:14.807 --rc genhtml_function_coverage=1 00:12:14.807 --rc genhtml_legend=1 00:12:14.807 --rc geninfo_all_blocks=1 00:12:14.807 --rc geninfo_unexecuted_blocks=1 00:12:14.807 00:12:14.807 ' 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.807 --rc genhtml_branch_coverage=1 00:12:14.807 --rc genhtml_function_coverage=1 00:12:14.807 --rc genhtml_legend=1 00:12:14.807 --rc geninfo_all_blocks=1 00:12:14.807 --rc geninfo_unexecuted_blocks=1 00:12:14.807 00:12:14.807 ' 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.807 --rc genhtml_branch_coverage=1 00:12:14.807 --rc genhtml_function_coverage=1 00:12:14.807 --rc genhtml_legend=1 00:12:14.807 --rc geninfo_all_blocks=1 00:12:14.807 --rc geninfo_unexecuted_blocks=1 00:12:14.807 00:12:14.807 ' 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.807 --rc genhtml_branch_coverage=1 00:12:14.807 --rc genhtml_function_coverage=1 00:12:14.807 --rc genhtml_legend=1 00:12:14.807 --rc geninfo_all_blocks=1 00:12:14.807 --rc geninfo_unexecuted_blocks=1 00:12:14.807 00:12:14.807 ' 00:12:14.807 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:14.807 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=469800 00:12:14.807 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:14.807 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:14.807 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 469800 
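Because the scheduler app at pid 469800 was started with --wait-for-rpc, it holds off subsystem initialization until framework_start_init arrives on /var/tmp/spdk.sock; the rpc_cmd calls that follow in the trace do exactly that. The same flow driven by hand with rpc.py, assuming this workspace's script path (all three method names appear in the rpc_get_methods listing earlier in this log):

  RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
  # The scheduler choice must land before subsystem init runs:
  $RPC framework_set_scheduler dynamic
  # Releases the --wait-for-rpc hold and performs subsystem initialization:
  $RPC framework_start_init
  # Optional sanity check of the active scheduler:
  $RPC framework_get_scheduler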
00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 469800 ']' 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.807 23:53:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:15.066 [2024-12-09 23:53:53.361581] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:15.066 [2024-12-09 23:53:53.361753] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469800 ] 00:12:15.066 [2024-12-09 23:53:53.473298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.066 [2024-12-09 23:53:53.538044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.066 [2024-12-09 23:53:53.538100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.066 [2024-12-09 23:53:53.538166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.066 [2024-12-09 23:53:53.538169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.324 23:53:53 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.324 23:53:53 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:12:15.324 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:15.324 23:53:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.324 23:53:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:15.324 [2024-12-09 23:53:53.655137] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:12:15.324 [2024-12-09 23:53:53.655164] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:12:15.324 [2024-12-09 23:53:53.655180] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:12:15.324 [2024-12-09 23:53:53.655190] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:12:15.324 [2024-12-09 23:53:53.655202] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:12:15.324 23:53:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.324 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:15.324 23:53:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.324 23:53:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 [2024-12-09 23:53:53.874053] 'OCF_Core' volume operations registered 00:12:15.583 [2024-12-09 23:53:53.874116] 'OCF_Cache' volume operations registered 00:12:15.583 [2024-12-09 23:53:53.878544] 'OCF Composite' volume operations registered 00:12:15.583 [2024-12-09 23:53:53.883106] 
'SPDK_block_device' volume operations registered 00:12:15.583 [2024-12-09 23:53:53.884302] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:12:15.583 23:53:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.583 23:53:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:15.583 23:53:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.583 23:53:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.583 23:53:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 ************************************ 00:12:15.583 START TEST scheduler_create_thread 00:12:15.583 ************************************ 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 2 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 3 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 4 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 5 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.583 23:53:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.583 6 00:12:15.583 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 7 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 8 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 9 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 10 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 23:53:54 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.584 00:12:15.584 real 0m0.112s 00:12:15.584 user 0m0.009s 00:12:15.584 sys 0m0.004s 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.584 23:53:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.584 ************************************ 00:12:15.584 END TEST scheduler_create_thread 00:12:15.584 ************************************ 00:12:15.584 23:53:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:15.584 23:53:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 469800 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 469800 ']' 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 469800 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 469800 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 469800' 00:12:15.584 killing process with pid 469800 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 469800 00:12:15.584 23:53:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 469800 00:12:16.150 [2024-12-09 23:53:54.507790] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
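The create/tune/delete churn just traced can be replayed by hand against a scheduler app started with --wait-for-rpc. A sketch, assuming (as the captured thread_id=11/12 above suggests) that scheduler_thread_create prints the new thread's id, and that the scheduler_plugin module shipped with the test is importable:

# assumes PYTHONPATH includes test/event/scheduler so --plugin scheduler_plugin resolves
rpc="/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
for mask in 0x1 0x2 0x4 0x8; do                          # one busy and one idle thread per core in -m 0xF
    $rpc scheduler_thread_create -n active_pinned -m $mask -a 100
    $rpc scheduler_thread_create -n idle_pinned   -m $mask -a 0
done
$rpc scheduler_thread_create -n one_third_active -a 30   # unpinned, 30% busy
id=$($rpc scheduler_thread_create -n half_active -a 0)   # created idle...
$rpc scheduler_thread_set_active "$id" 50                # ...then raised to 50% load
id=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$id"                       # and one thread removed under load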
00:12:16.409 00:12:16.409 real 0m1.786s 00:12:16.409 user 0m2.124s 00:12:16.409 sys 0m0.527s 00:12:16.409 23:53:54 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.409 23:53:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:16.409 ************************************ 00:12:16.409 END TEST event_scheduler 00:12:16.409 ************************************ 00:12:16.409 23:53:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:16.409 23:53:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:16.409 23:53:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.409 23:53:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.409 23:53:54 event -- common/autotest_common.sh@10 -- # set +x 00:12:16.667 ************************************ 00:12:16.668 START TEST app_repeat 00:12:16.668 ************************************ 00:12:16.668 23:53:54 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=470047 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 470047' 00:12:16.668 Process app_repeat pid: 470047 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:16.668 spdk_app_start Round 0 00:12:16.668 23:53:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 470047 /var/tmp/spdk-nbd.sock 00:12:16.668 23:53:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 470047 ']' 00:12:16.668 23:53:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:16.668 23:53:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.668 23:53:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:16.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:16.668 23:53:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.668 23:53:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:16.668 [2024-12-09 23:53:54.985471] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:12:16.668 [2024-12-09 23:53:54.985552] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470047 ] 00:12:16.668 [2024-12-09 23:53:55.092997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:16.668 [2024-12-09 23:53:55.182935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.668 [2024-12-09 23:53:55.182940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.926 23:53:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.926 23:53:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:16.926 23:53:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:17.492 Malloc0 00:12:17.492 23:53:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:18.424 Malloc1 00:12:18.424 23:53:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.424 23:53:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:18.990 /dev/nbd0 00:12:18.990 23:53:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:18.990 23:53:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:18.990 1+0 records in 00:12:18.990 1+0 records out 00:12:18.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0185499 s, 221 kB/s 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.990 23:53:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:18.990 23:53:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.990 23:53:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.990 23:53:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:19.248 /dev/nbd1 00:12:19.248 23:53:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.248 23:53:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:19.248 1+0 records in 00:12:19.248 1+0 records out 00:12:19.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223545 s, 18.3 MB/s 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.248 23:53:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:19.248 23:53:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.248 23:53:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.248 23:53:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:12:19.248 23:53:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.505 23:53:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:20.070 23:53:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:20.070 { 00:12:20.070 "nbd_device": "/dev/nbd0", 00:12:20.070 "bdev_name": "Malloc0" 00:12:20.070 }, 00:12:20.070 { 00:12:20.070 "nbd_device": "/dev/nbd1", 00:12:20.070 "bdev_name": "Malloc1" 00:12:20.070 } 00:12:20.070 ]' 00:12:20.070 23:53:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:20.070 { 00:12:20.070 "nbd_device": "/dev/nbd0", 00:12:20.070 "bdev_name": "Malloc0" 00:12:20.070 }, 00:12:20.070 { 00:12:20.070 "nbd_device": "/dev/nbd1", 00:12:20.070 "bdev_name": "Malloc1" 00:12:20.070 } 00:12:20.070 ]' 00:12:20.070 23:53:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:20.070 23:53:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:20.070 /dev/nbd1' 00:12:20.070 23:53:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:20.071 /dev/nbd1' 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:20.071 256+0 records in 00:12:20.071 256+0 records out 00:12:20.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00867993 s, 121 MB/s 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:20.071 256+0 records in 00:12:20.071 256+0 records out 00:12:20.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261271 s, 40.1 MB/s 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:20.071 256+0 records in 00:12:20.071 256+0 records out 00:12:20.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225134 s, 46.6 MB/s 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.071 23:53:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.637 23:53:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.203 23:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:21.768 23:54:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:21.768 23:54:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:22.026 23:54:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:22.284 [2024-12-09 23:54:00.751266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:22.542 [2024-12-09 23:54:00.840454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.542 [2024-12-09 23:54:00.840454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.542 [2024-12-09 23:54:00.896282] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:22.542 [2024-12-09 23:54:00.896348] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:25.088 23:54:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:25.088 23:54:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:25.088 spdk_app_start Round 1 00:12:25.088 23:54:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 470047 /var/tmp/spdk-nbd.sock 00:12:25.088 23:54:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 470047 ']' 00:12:25.088 23:54:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:25.088 23:54:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.088 23:54:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:25.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
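Round 0 above (and each round after it) drives the same recipe over the app's /var/tmp/spdk-nbd.sock socket: two 64 MiB malloc bdevs with 4 KiB blocks, exported as /dev/nbd0 and /dev/nbd1, a random-data write and read-back pass, then teardown and SIGTERM before the next round. Condensed into a sketch; the interleaved write/compare loop is a simplification, since the trace writes both devices first and then compares both:

rpc() { /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
tmp=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
rpc bdev_malloc_create 64 4096                    # -> Malloc0 (64 MiB, 4 KiB blocks)
rpc bdev_malloc_create 64 4096                    # -> Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # push it through the nbd device
    cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte read-back check
done
rm "$tmp"
rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
rpc spdk_kill_instance SIGTERM                    # app_repeat brings the app back up for the next round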
00:12:25.088 23:54:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.088 23:54:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:25.346 23:54:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.346 23:54:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:25.346 23:54:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:25.911 Malloc0 00:12:25.911 23:54:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:26.169 Malloc1 00:12:26.169 23:54:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:26.169 23:54:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:26.170 23:54:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:26.170 23:54:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:26.170 23:54:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:26.170 23:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:26.170 23:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:26.170 23:54:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:26.429 /dev/nbd0 00:12:26.429 23:54:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:26.429 23:54:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:26.429 23:54:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:12:26.429 1+0 records in 00:12:26.429 1+0 records out 00:12:26.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229705 s, 17.8 MB/s 00:12:26.687 23:54:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:26.687 23:54:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:26.687 23:54:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:26.687 23:54:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:26.687 23:54:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:26.687 23:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.687 23:54:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:26.687 23:54:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:26.945 /dev/nbd1 00:12:26.945 23:54:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:26.945 23:54:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:26.945 1+0 records in 00:12:26.945 1+0 records out 00:12:26.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206808 s, 19.8 MB/s 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:26.945 23:54:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:26.945 23:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.945 23:54:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:26.945 23:54:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:26.945 23:54:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.945 23:54:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:27.203 23:54:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:27.203 { 00:12:27.203 
"nbd_device": "/dev/nbd0", 00:12:27.203 "bdev_name": "Malloc0" 00:12:27.203 }, 00:12:27.203 { 00:12:27.203 "nbd_device": "/dev/nbd1", 00:12:27.203 "bdev_name": "Malloc1" 00:12:27.203 } 00:12:27.203 ]' 00:12:27.203 23:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:27.203 { 00:12:27.203 "nbd_device": "/dev/nbd0", 00:12:27.203 "bdev_name": "Malloc0" 00:12:27.203 }, 00:12:27.203 { 00:12:27.204 "nbd_device": "/dev/nbd1", 00:12:27.204 "bdev_name": "Malloc1" 00:12:27.204 } 00:12:27.204 ]' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:27.204 /dev/nbd1' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:27.204 /dev/nbd1' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:27.204 256+0 records in 00:12:27.204 256+0 records out 00:12:27.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531097 s, 197 MB/s 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:27.204 256+0 records in 00:12:27.204 256+0 records out 00:12:27.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209602 s, 50.0 MB/s 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:27.204 256+0 records in 00:12:27.204 256+0 records out 00:12:27.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229565 s, 45.7 MB/s 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:27.204 23:54:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:27.461 23:54:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:27.462 23:54:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.462 23:54:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:27.462 23:54:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.462 23:54:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:27.462 23:54:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.462 23:54:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.719 23:54:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.284 23:54:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:28.851 23:54:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:28.851 23:54:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:29.109 23:54:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:29.367 [2024-12-09 23:54:07.690743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:29.367 [2024-12-09 23:54:07.776672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.367 [2024-12-09 23:54:07.776676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.367 [2024-12-09 23:54:07.832446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:29.367 [2024-12-09 23:54:07.832508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:32.647 23:54:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:32.647 23:54:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:32.647 spdk_app_start Round 2 00:12:32.647 23:54:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 470047 /var/tmp/spdk-nbd.sock 00:12:32.647 23:54:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 470047 ']' 00:12:32.647 23:54:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:32.648 23:54:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.648 23:54:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:32.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
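Between nbd_start_disk and the first dd, every round runs the readiness gate traced as the repeated (( i <= 20 )) / grep -w /proc/partitions / dd iflag=direct sequence: common/autotest_common.sh's waitfornbd. A reconstruction from the trace; the retry delay is an assumption, since the helper's timing is not visible here, and the scratch path below stands in for the run's spdk/test/event/nbdtest:

waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest
    for ((i = 1; i <= 20; i++)); do               # stage 1: wait for the kernel to publish the device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                 # assumption: the real back-off is not in the trace
    done
    for ((i = 1; i <= 20; i++)); do               # stage 2: prove it answers a direct read
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || continue
        local size=$(stat -c %s "$tmp")           # trace checks the read size, as in '[' 4096 '!=' 0 ']'
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0
    done
    return 1
}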
00:12:32.648 23:54:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.648 23:54:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:32.648 23:54:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.648 23:54:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:32.648 23:54:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:33.213 Malloc0 00:12:33.214 23:54:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:33.472 Malloc1 00:12:33.472 23:54:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.472 23:54:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:34.406 /dev/nbd0 00:12:34.406 23:54:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.406 23:54:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:12:34.406 1+0 records in 00:12:34.406 1+0 records out 00:12:34.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152822 s, 26.8 MB/s 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.406 23:54:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:34.406 23:54:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.406 23:54:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.406 23:54:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:34.971 /dev/nbd1 00:12:34.971 23:54:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:34.971 23:54:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:34.971 1+0 records in 00:12:34.971 1+0 records out 00:12:34.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198648 s, 20.6 MB/s 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.971 23:54:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:34.971 23:54:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.972 23:54:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.972 23:54:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:34.972 23:54:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.972 23:54:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:35.230 { 00:12:35.230 
"nbd_device": "/dev/nbd0", 00:12:35.230 "bdev_name": "Malloc0" 00:12:35.230 }, 00:12:35.230 { 00:12:35.230 "nbd_device": "/dev/nbd1", 00:12:35.230 "bdev_name": "Malloc1" 00:12:35.230 } 00:12:35.230 ]' 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:35.230 { 00:12:35.230 "nbd_device": "/dev/nbd0", 00:12:35.230 "bdev_name": "Malloc0" 00:12:35.230 }, 00:12:35.230 { 00:12:35.230 "nbd_device": "/dev/nbd1", 00:12:35.230 "bdev_name": "Malloc1" 00:12:35.230 } 00:12:35.230 ]' 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:35.230 /dev/nbd1' 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:35.230 /dev/nbd1' 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:35.230 256+0 records in 00:12:35.230 256+0 records out 00:12:35.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468407 s, 224 MB/s 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:35.230 256+0 records in 00:12:35.230 256+0 records out 00:12:35.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212626 s, 49.3 MB/s 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.230 23:54:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:35.488 256+0 records in 00:12:35.488 256+0 records out 00:12:35.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230542 s, 45.5 MB/s 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.488 23:54:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.053 23:54:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.619 23:54:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.876 23:54:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.876 23:54:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:37.809 23:54:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:37.809 [2024-12-09 23:54:16.234026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:37.809 [2024-12-09 23:54:16.316421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.809 [2024-12-09 23:54:16.316424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.067 [2024-12-09 23:54:16.375222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:38.067 [2024-12-09 23:54:16.375288] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:40.597 23:54:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 470047 /var/tmp/spdk-nbd.sock 00:12:40.597 23:54:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 470047 ']' 00:12:40.597 23:54:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:40.597 23:54:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.597 23:54:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:40.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
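The nbd round-trip above is the core of the data-path check: nbd_dd_data_verify fills a scratch file with 1 MiB of random data and dd's it onto each exported device with O_DIRECT, re-reads the devices with cmp to prove the bytes survived, and nbd_get_count then confirms the RPC disk list comes back empty once nbd_stop_disk has run. A condensed sketch of that pattern, with the scratch path and device list as stand-ins for the test's own:

    #!/usr/bin/env bash
    # Sketch of the write/verify flow traced in bdev/nbd_common.sh; paths hypothetical.
    set -euo pipefail
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: 256 x 4 KiB random blocks onto each device, bypassing the page cache
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # after nbd_stop_disk, the disk list should come back empty
    disks=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[].nbd_device')
    count=$(grep -c /dev/nbd <<< "$disks" || true)
    [ "$count" -eq 0 ]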
00:12:40.597 23:54:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.597 23:54:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:40.855 23:54:19 event.app_repeat -- event/event.sh@39 -- # killprocess 470047 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 470047 ']' 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 470047 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470047 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470047' 00:12:40.855 killing process with pid 470047 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 470047 00:12:40.855 23:54:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 470047 00:12:41.114 spdk_app_start is called in Round 0. 00:12:41.114 Shutdown signal received, stop current app iteration 00:12:41.114 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:12:41.114 spdk_app_start is called in Round 1. 00:12:41.114 Shutdown signal received, stop current app iteration 00:12:41.114 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:12:41.114 spdk_app_start is called in Round 2. 00:12:41.114 Shutdown signal received, stop current app iteration 00:12:41.114 Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 reinitialization... 00:12:41.114 spdk_app_start is called in Round 3. 00:12:41.114 Shutdown signal received, stop current app iteration 00:12:41.114 23:54:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:41.114 23:54:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:41.114 00:12:41.114 real 0m24.631s 00:12:41.114 user 0m56.928s 00:12:41.114 sys 0m4.959s 00:12:41.114 23:54:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.114 23:54:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:41.114 ************************************ 00:12:41.114 END TEST app_repeat 00:12:41.114 ************************************ 00:12:41.114 23:54:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:41.114 23:54:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh 00:12:41.114 23:54:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:41.114 23:54:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.114 23:54:19 event -- common/autotest_common.sh@10 -- # set +x 00:12:41.373 ************************************ 00:12:41.373 START TEST cpu_locks 00:12:41.373 ************************************ 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh 00:12:41.373 * Looking for test storage... 
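The shutdown path above is guarded: killprocess first checks the pid is still alive with kill -0, then (on Linux) reads the process name with ps and refuses to signal anything that resolves to sudo, and only then kills and waits. A simplified reconstruction of that guard from the trace:

    # Reconstructed from the autotest_common.sh lines traced above; simplified.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" != sudo ] || return 1         # never signal a privileged wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap it; ignore its exit code
    }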
00:12:41.373 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.373 23:54:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:41.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.373 --rc genhtml_branch_coverage=1 00:12:41.373 --rc genhtml_function_coverage=1 00:12:41.373 --rc genhtml_legend=1 00:12:41.373 --rc geninfo_all_blocks=1 00:12:41.373 --rc geninfo_unexecuted_blocks=1 00:12:41.373 00:12:41.373 ' 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:41.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.373 --rc genhtml_branch_coverage=1 00:12:41.373 --rc 
genhtml_function_coverage=1 00:12:41.373 --rc genhtml_legend=1 00:12:41.373 --rc geninfo_all_blocks=1 00:12:41.373 --rc geninfo_unexecuted_blocks=1 00:12:41.373 00:12:41.373 ' 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:41.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.373 --rc genhtml_branch_coverage=1 00:12:41.373 --rc genhtml_function_coverage=1 00:12:41.373 --rc genhtml_legend=1 00:12:41.373 --rc geninfo_all_blocks=1 00:12:41.373 --rc geninfo_unexecuted_blocks=1 00:12:41.373 00:12:41.373 ' 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:41.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.373 --rc genhtml_branch_coverage=1 00:12:41.373 --rc genhtml_function_coverage=1 00:12:41.373 --rc genhtml_legend=1 00:12:41.373 --rc geninfo_all_blocks=1 00:12:41.373 --rc geninfo_unexecuted_blocks=1 00:12:41.373 00:12:41.373 ' 00:12:41.373 23:54:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:41.373 23:54:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:41.373 23:54:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:41.373 23:54:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.373 23:54:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:41.373 ************************************ 00:12:41.373 START TEST default_locks 00:12:41.373 ************************************ 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=473773 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 473773 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 473773 ']' 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.373 23:54:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:41.631 [2024-12-09 23:54:19.921408] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
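The lcov probe a little earlier in the trace (lt 1.15 2 via cmp_versions) is a pure-bash version comparison: it splits both version strings on '.', '-' and ':' and compares the numeric fields left to right, padding the shorter one with zeros. A simplified reconstruction of the logic visible in the scripts/common.sh trace (the original also validates each field through a decimal helper, omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                    # split fields on . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if ((d1 > d2)); then [ "$op" = '>' ]; return; fi
            if ((d1 < d2)); then [ "$op" = '<' ]; return; fi
        done
        [ "$op" = '==' ]                 # every field matched
    }

So lt 1.15 2 compares 1 against 2 in the first field and returns success, which is how the harness decides whether the installed lcov predates 2.x.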
00:12:41.631 [2024-12-09 23:54:19.921503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473773 ] 00:12:41.631 [2024-12-09 23:54:19.999551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.631 [2024-12-09 23:54:20.062665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.889 [2024-12-09 23:54:20.374930] 'OCF_Core' volume operations registered 00:12:41.889 [2024-12-09 23:54:20.374984] 'OCF_Cache' volume operations registered 00:12:41.889 [2024-12-09 23:54:20.381972] 'OCF Composite' volume operations registered 00:12:41.889 [2024-12-09 23:54:20.388500] 'SPDK_block_device' volume operations registered 00:12:42.147 23:54:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.147 23:54:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:12:42.147 23:54:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 473773 00:12:42.147 23:54:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 473773 00:12:42.147 23:54:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:43.524 lslocks: write error 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 473773 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 473773 ']' 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 473773 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 473773 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 473773' 00:12:43.524 killing process with pid 473773 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 473773 00:12:43.524 23:54:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 473773 00:12:44.089 23:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 473773 00:12:44.089 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:12:44.089 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 473773 00:12:44.089 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:44.089 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 473773 
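locks_exist, which gates this whole test, is a one-liner: list the file locks held by the target pid and look for an entry whose name contains spdk_cpu_lock (a single-core target started with -m 0x1 is expected to hold one such lock; the lock files live under /var/tmp on this rig, though the exact naming is an assumption here). The 'lslocks: write error' lines around it are expected noise rather than failures: grep -q exits as soon as it matches, the pipe closes, and lslocks reports EPIPE on the output it could no longer write.

    # The probe as traced; the pid is the spdk_tgt just started.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # grep -q may EPIPE lslocks: harmless
    }
    locks_exist 473773 && echo 'core lock held'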
00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 473773 ']' 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:44.090 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (473773) - No such process 00:12:44.090 ERROR: process (pid: 473773) is no longer running 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:44.090 00:12:44.090 real 0m2.588s 00:12:44.090 user 0m2.403s 00:12:44.090 sys 0m1.290s 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.090 23:54:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:44.090 ************************************ 00:12:44.090 END TEST default_locks 00:12:44.090 ************************************ 00:12:44.090 23:54:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:44.090 23:54:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.090 23:54:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.090 23:54:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:44.090 ************************************ 00:12:44.090 START TEST default_locks_via_rpc 00:12:44.090 ************************************ 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=474073 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 474073 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 474073 ']' 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.090 23:54:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.090 [2024-12-09 23:54:22.542132] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:44.090 [2024-12-09 23:54:22.542210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474073 ] 00:12:44.349 [2024-12-09 23:54:22.610576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.349 [2024-12-09 23:54:22.669155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.608 [2024-12-09 23:54:22.987869] 'OCF_Core' volume operations registered 00:12:44.608 [2024-12-09 23:54:22.987918] 'OCF_Cache' volume operations registered 00:12:44.608 [2024-12-09 23:54:22.995835] 'OCF Composite' volume operations registered 00:12:44.608 [2024-12-09 23:54:23.003278] 'SPDK_block_device' volume operations registered 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 474073 00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 474073 
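default_locks_via_rpc exercises the same invariant through the runtime API instead of process lifecycle: drop the per-core locks over RPC, assert none are present, re-enable them, and assert the lock is back. Condensed from the trace (rpc.py path as shown there; the tgt_pid variable is a stand-in):

    rpc='/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py'
    $rpc framework_disable_cpumask_locks              # release the core lock files
    ! lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock   # nothing held now
    $rpc framework_enable_cpumask_locks               # re-claim them at runtime
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock     # lock is back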
00:12:44.866 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 474073 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 474073 ']' 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 474073 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474073 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474073' 00:12:45.432 killing process with pid 474073 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 474073 00:12:45.432 23:54:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 474073 00:12:46.367 00:12:46.367 real 0m2.057s 00:12:46.367 user 0m1.803s 00:12:46.367 sys 0m0.890s 00:12:46.367 23:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.367 23:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.367 ************************************ 00:12:46.367 END TEST default_locks_via_rpc 00:12:46.367 ************************************ 00:12:46.367 23:54:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:46.367 23:54:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.367 23:54:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.367 23:54:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:46.367 ************************************ 00:12:46.367 START TEST non_locking_app_on_locked_coremask 00:12:46.367 ************************************ 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=474360 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 474360 /var/tmp/spdk.sock 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 474360 ']' 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.367 23:54:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:46.367 [2024-12-09 23:54:24.652562] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:46.367 [2024-12-09 23:54:24.652641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474360 ] 00:12:46.367 [2024-12-09 23:54:24.717573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.367 [2024-12-09 23:54:24.776719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.626 [2024-12-09 23:54:25.098889] 'OCF_Core' volume operations registered 00:12:46.626 [2024-12-09 23:54:25.098985] 'OCF_Cache' volume operations registered 00:12:46.626 [2024-12-09 23:54:25.106894] 'OCF Composite' volume operations registered 00:12:46.626 [2024-12-09 23:54:25.113934] 'SPDK_block_device' volume operations registered 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=474378 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 474378 /var/tmp/spdk2.sock 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 474378 ']' 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:46.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.885 23:54:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:46.885 [2024-12-09 23:54:25.391751] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
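non_locking_app_on_locked_coremask then shows the escape hatch: two targets can share core 0 only because the second one opts out of core locking and listens on its own RPC socket. Stripped to the two launch lines (binary path shortened):

    spdk_tgt -m 0x1 &                                                  # first app: takes the core 0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second app: same core, no lock taken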
00:12:46.885 [2024-12-09 23:54:25.391871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474378 ] 00:12:47.144 [2024-12-09 23:54:25.554784] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:47.144 [2024-12-09 23:54:25.554835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.402 [2024-12-09 23:54:25.736335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.968 [2024-12-09 23:54:26.416826] 'OCF_Core' volume operations registered 00:12:47.968 [2024-12-09 23:54:26.416873] 'OCF_Cache' volume operations registered 00:12:47.968 [2024-12-09 23:54:26.431872] 'OCF Composite' volume operations registered 00:12:47.968 [2024-12-09 23:54:26.446938] 'SPDK_block_device' volume operations registered 00:12:48.902 23:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.902 23:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:48.902 23:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 474360 00:12:48.902 23:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 474360 00:12:48.902 23:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:49.836 lslocks: write error 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 474360 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 474360 ']' 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 474360 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474360 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474360' 00:12:49.836 killing process with pid 474360 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 474360 00:12:49.836 23:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 474360 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 474378 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 474378 ']' 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 474378 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474378 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474378' 00:12:51.303 killing process with pid 474378 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 474378 00:12:51.303 23:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 474378 00:12:51.939 00:12:51.939 real 0m5.705s 00:12:51.939 user 0m5.506s 00:12:51.939 sys 0m2.024s 00:12:51.939 23:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.939 23:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 ************************************ 00:12:51.939 END TEST non_locking_app_on_locked_coremask 00:12:51.939 ************************************ 00:12:51.939 23:54:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:51.939 23:54:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:51.939 23:54:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.939 23:54:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 ************************************ 00:12:51.939 START TEST locking_app_on_unlocked_coremask 00:12:51.939 ************************************ 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=475063 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 475063 /var/tmp/spdk.sock 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 475063 ']' 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.939 23:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 [2024-12-09 23:54:30.430632] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:51.939 [2024-12-09 23:54:30.430740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475063 ] 00:12:52.239 [2024-12-09 23:54:30.534912] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:52.239 [2024-12-09 23:54:30.534992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.239 [2024-12-09 23:54:30.631071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.530 [2024-12-09 23:54:30.996440] 'OCF_Core' volume operations registered 00:12:52.530 [2024-12-09 23:54:30.996533] 'OCF_Cache' volume operations registered 00:12:52.530 [2024-12-09 23:54:31.004876] 'OCF Composite' volume operations registered 00:12:52.530 [2024-12-09 23:54:31.013035] 'SPDK_block_device' volume operations registered 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=475084 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 475084 /var/tmp/spdk2.sock 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 475084 ']' 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:52.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.816 23:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 [2024-12-09 23:54:31.358405] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
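locking_app_on_unlocked_coremask is the mirror image of the previous case: this time the first target (pid 475063) is the one launched with --disable-cpumask-locks, hence the 'CPU core locks deactivated' notice above, so the second, lock-enabled target on the same mask is free to claim core 0 for itself:

    spdk_tgt -m 0x1 --disable-cpumask-locks &    # first app leaves core 0 unclaimed
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # second app acquires the core 0 lock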
00:12:53.074 [2024-12-09 23:54:31.358507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475084 ] 00:12:53.074 [2024-12-09 23:54:31.531005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.334 [2024-12-09 23:54:31.707236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.904 [2024-12-09 23:54:32.333988] 'OCF_Core' volume operations registered 00:12:53.904 [2024-12-09 23:54:32.334035] 'OCF_Cache' volume operations registered 00:12:53.904 [2024-12-09 23:54:32.349144] 'OCF Composite' volume operations registered 00:12:53.904 [2024-12-09 23:54:32.364150] 'SPDK_block_device' volume operations registered 00:12:54.473 23:54:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.473 23:54:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:54.473 23:54:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 475084 00:12:54.473 23:54:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 475084 00:12:54.473 23:54:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:57.005 lslocks: write error 00:12:57.005 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 475063 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 475063 ']' 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 475063 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475063 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475063' 00:12:57.006 killing process with pid 475063 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 475063 00:12:57.006 23:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 475063 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 475084 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 475084 ']' 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 475084 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475084 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475084' 00:12:58.381 killing process with pid 475084 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 475084 00:12:58.381 23:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 475084 00:12:58.947 00:12:58.947 real 0m6.984s 00:12:58.947 user 0m7.273s 00:12:58.947 sys 0m2.687s 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:58.947 ************************************ 00:12:58.947 END TEST locking_app_on_unlocked_coremask 00:12:58.947 ************************************ 00:12:58.947 23:54:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:58.947 23:54:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.947 23:54:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.947 23:54:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:58.947 ************************************ 00:12:58.947 START TEST locking_app_on_locked_coremask 00:12:58.947 ************************************ 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=475893 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 475893 /var/tmp/spdk.sock 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 475893 ']' 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.947 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.948 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.948 23:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:58.948 [2024-12-09 23:54:37.460486] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:12:58.948 [2024-12-09 23:54:37.460564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475893 ] 00:12:59.206 [2024-12-09 23:54:37.532898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.206 [2024-12-09 23:54:37.591239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.465 [2024-12-09 23:54:37.923840] 'OCF_Core' volume operations registered 00:12:59.465 [2024-12-09 23:54:37.923933] 'OCF_Cache' volume operations registered 00:12:59.465 [2024-12-09 23:54:37.932401] 'OCF Composite' volume operations registered 00:12:59.465 [2024-12-09 23:54:37.940820] 'SPDK_block_device' volume operations registered 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=475906 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 475906 /var/tmp/spdk2.sock 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 475906 /var/tmp/spdk2.sock 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 475906 /var/tmp/spdk2.sock 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 475906 ']' 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:59.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.724 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:59.724 [2024-12-09 23:54:38.241407] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:12:59.724 [2024-12-09 23:54:38.241497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475906 ] 00:12:59.984 [2024-12-09 23:54:38.345363] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 475893 has claimed it. 00:12:59.984 [2024-12-09 23:54:38.345417] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:00.552 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (475906) - No such process 00:13:00.552 ERROR: process (pid: 475906) is no longer running 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 475893 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 475893 00:13:00.552 23:54:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:01.929 lslocks: write error 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 475893 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 475893 ']' 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 475893 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475893 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475893' 00:13:01.929 killing process with pid 475893 00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 475893 
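Here the failure is the assertion: the second target cannot take the core 0 lock ('Cannot create lock on core 0, probably process 475893 has claimed it') and exits, and the harness encodes that expectation by running waitforlisten under NOT, a wrapper that simply inverts the exit status of the command it runs. A simplified reconstruction of the es bookkeeping traced above:

    # Simplified NOT helper, reconstructed from the autotest_common.sh trace.
    NOT() {
        local es=0
        "$@" || es=$?
        ((es != 0))      # succeed only if the wrapped command failed
    }
    NOT waitforlisten 475906 /var/tmp/spdk2.sock   # passes because startup failed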
00:13:01.929 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 475893 00:13:02.495 00:13:02.495 real 0m3.420s 00:13:02.495 user 0m3.431s 00:13:02.495 sys 0m1.427s 00:13:02.495 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.495 23:54:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:02.495 ************************************ 00:13:02.495 END TEST locking_app_on_locked_coremask 00:13:02.495 ************************************ 00:13:02.495 23:54:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:02.495 23:54:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.495 23:54:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.495 23:54:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:02.495 ************************************ 00:13:02.495 START TEST locking_overlapped_coremask 00:13:02.495 ************************************ 00:13:02.495 23:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:13:02.495 23:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=476322 00:13:02.495 23:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:13:02.495 23:54:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 476322 /var/tmp/spdk.sock 00:13:02.495 23:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 476322 ']' 00:13:02.495 23:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.495 23:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.496 23:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.496 23:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.496 23:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:02.496 [2024-12-09 23:54:40.966028] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:13:02.496 [2024-12-09 23:54:40.966143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476322 ] 00:13:02.753 [2024-12-09 23:54:41.071906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.754 [2024-12-09 23:54:41.170234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.754 [2024-12-09 23:54:41.170316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.754 [2024-12-09 23:54:41.170320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.012 [2024-12-09 23:54:41.407888] 'OCF_Core' volume operations registered 00:13:03.012 [2024-12-09 23:54:41.407941] 'OCF_Cache' volume operations registered 00:13:03.012 [2024-12-09 23:54:41.412567] 'OCF Composite' volume operations registered 00:13:03.012 [2024-12-09 23:54:41.417210] 'SPDK_block_device' volume operations registered 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=476333 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 476333 /var/tmp/spdk2.sock 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 476333 /var/tmp/spdk2.sock 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 476333 /var/tmp/spdk2.sock 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 476333 ']' 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:03.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.271 23:54:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:03.271 [2024-12-09 23:54:41.654235] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:03.271 [2024-12-09 23:54:41.654343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476333 ] 00:13:03.271 [2024-12-09 23:54:41.772693] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 476322 has claimed it. 00:13:03.271 [2024-12-09 23:54:41.772745] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:04.211 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (476333) - No such process 00:13:04.211 ERROR: process (pid: 476333) is no longer running 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 476322 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 476322 ']' 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 476322 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476322 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476322' 00:13:04.212 killing process with pid 476322 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 476322 00:13:04.212 23:54:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 476322 00:13:04.780 00:13:04.780 real 0m2.325s 00:13:04.780 user 0m6.397s 00:13:04.780 sys 0m0.640s 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:04.780 ************************************ 00:13:04.780 END TEST locking_overlapped_coremask 00:13:04.780 ************************************ 00:13:04.780 23:54:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:04.780 23:54:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:04.780 23:54:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.780 23:54:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:04.780 ************************************ 00:13:04.780 START TEST locking_overlapped_coremask_via_rpc 00:13:04.780 ************************************ 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=476618 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 476618 /var/tmp/spdk.sock 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 476618 ']' 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.780 23:54:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.041 [2024-12-09 23:54:43.404533] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:13:05.041 [2024-12-09 23:54:43.404684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476618 ] 00:13:05.041 [2024-12-09 23:54:43.511641] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:05.041 [2024-12-09 23:54:43.511683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.301 [2024-12-09 23:54:43.581946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.301 [2024-12-09 23:54:43.582001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.301 [2024-12-09 23:54:43.582005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.560 [2024-12-09 23:54:43.824770] 'OCF_Core' volume operations registered 00:13:05.560 [2024-12-09 23:54:43.824825] 'OCF_Cache' volume operations registered 00:13:05.560 [2024-12-09 23:54:43.829393] 'OCF Composite' volume operations registered 00:13:05.560 [2024-12-09 23:54:43.833991] 'SPDK_block_device' volume operations registered 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=476754 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 476754 /var/tmp/spdk2.sock 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 476754 ']' 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:06.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.128 23:54:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.128 [2024-12-09 23:54:44.518590] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:06.128 [2024-12-09 23:54:44.518691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476754 ] 00:13:06.128 [2024-12-09 23:54:44.629971] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:06.128 [2024-12-09 23:54:44.630016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.388 [2024-12-09 23:54:44.751602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.388 [2024-12-09 23:54:44.754799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:06.388 [2024-12-09 23:54:44.754801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.957 [2024-12-09 23:54:45.239986] 'OCF_Core' volume operations registered 00:13:06.957 [2024-12-09 23:54:45.240053] 'OCF_Cache' volume operations registered 00:13:06.957 [2024-12-09 23:54:45.248442] 'OCF Composite' volume operations registered 00:13:06.957 [2024-12-09 23:54:45.256943] 'SPDK_block_device' volume operations registered 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.528 [2024-12-09 23:54:45.775917] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 476618 has claimed it. 
00:13:07.528 request: 00:13:07.528 { 00:13:07.528 "method": "framework_enable_cpumask_locks", 00:13:07.528 "req_id": 1 00:13:07.528 } 00:13:07.528 Got JSON-RPC error response 00:13:07.528 response: 00:13:07.528 { 00:13:07.528 "code": -32603, 00:13:07.528 "message": "Failed to claim CPU core: 2" 00:13:07.528 } 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 476618 /var/tmp/spdk.sock 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 476618 ']' 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.528 23:54:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 476754 /var/tmp/spdk2.sock 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 476754 ']' 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:08.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
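The two failures above (claim_cpu_cores erroring out, then the framework_enable_cpumask_locks RPC returning "Failed to claim CPU core: 2") both trace back to SPDK's per-core lock files, visible earlier in this log as /var/tmp/spdk_cpu_lock_000 through _002 and in the lslocks output. A minimal sketch of the same claim pattern, assuming only what the log shows: the lock-file path is taken verbatim from the log, while the flock-style advisory lock is an assumption about the mechanism, not SPDK source code.

    # Illustrative sketch, not SPDK code: claim "core 0" the way the lock
    # files above behave. A second claimant fails while the first holds it.
    exec 9> /var/tmp/spdk_cpu_lock_000          # path taken from the log
    if ! flock -n 9; then                       # non-blocking claim attempt
        echo "Cannot create lock on core 0: another process has claimed it" >&2
        exit 1
    fi
    echo "core 0 claimed; lock held until fd 9 is closed"

Run twice concurrently, the second invocation exits immediately with the error branch, which is the behavior the NOT waitforlisten steps in this section deliberately provoke.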
00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.096 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:08.356 00:13:08.356 real 0m3.443s 00:13:08.356 user 0m1.883s 00:13:08.356 sys 0m0.270s 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.356 23:54:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.356 ************************************ 00:13:08.356 END TEST locking_overlapped_coremask_via_rpc 00:13:08.356 ************************************ 00:13:08.356 23:54:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:08.356 23:54:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 476618 ]] 00:13:08.356 23:54:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 476618 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 476618 ']' 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 476618 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476618 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476618' 00:13:08.356 killing process with pid 476618 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 476618 00:13:08.356 23:54:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 476618 00:13:09.293 23:54:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 476754 ]] 00:13:09.293 23:54:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 476754 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 476754 ']' 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 476754 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476754 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476754' 00:13:09.293 killing process with pid 476754 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 476754 00:13:09.293 23:54:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 476754 00:13:09.861 23:54:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:09.861 23:54:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:09.861 23:54:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 476618 ]] 00:13:09.861 23:54:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 476618 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 476618 ']' 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 476618 00:13:09.861 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (476618) - No such process 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 476618 is not found' 00:13:09.861 Process with pid 476618 is not found 00:13:09.861 23:54:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 476754 ]] 00:13:09.861 23:54:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 476754 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 476754 ']' 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 476754 00:13:09.861 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (476754) - No such process 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 476754 is not found' 00:13:09.861 Process with pid 476754 is not found 00:13:09.861 23:54:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:09.861 00:13:09.861 real 0m28.571s 00:13:09.861 user 0m47.423s 00:13:09.861 sys 0m10.624s 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.861 23:54:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:09.861 ************************************ 00:13:09.861 END TEST cpu_locks 00:13:09.861 ************************************ 00:13:09.861 00:13:09.861 real 0m59.452s 00:13:09.861 user 1m53.344s 00:13:09.861 sys 0m16.707s 00:13:09.861 23:54:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.861 23:54:48 event -- common/autotest_common.sh@10 -- # set +x 00:13:09.861 ************************************ 00:13:09.861 END TEST event 00:13:09.861 ************************************ 00:13:09.861 23:54:48 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh 00:13:09.861 23:54:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.861 23:54:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.861 23:54:48 -- common/autotest_common.sh@10 -- # set +x 00:13:09.861 ************************************ 00:13:09.861 START TEST thread 00:13:09.861 ************************************ 00:13:09.861 23:54:48 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh 00:13:09.861 * Looking for test storage... 00:13:09.861 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread 00:13:09.861 23:54:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:09.861 23:54:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:13:09.861 23:54:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:10.121 23:54:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.121 23:54:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.121 23:54:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.121 23:54:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.121 23:54:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.121 23:54:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.121 23:54:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.121 23:54:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.121 23:54:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.121 23:54:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.121 23:54:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.121 23:54:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:13:10.121 23:54:48 thread -- scripts/common.sh@345 -- # : 1 00:13:10.121 23:54:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.121 23:54:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:10.121 23:54:48 thread -- scripts/common.sh@365 -- # decimal 1 00:13:10.121 23:54:48 thread -- scripts/common.sh@353 -- # local d=1 00:13:10.121 23:54:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.121 23:54:48 thread -- scripts/common.sh@355 -- # echo 1 00:13:10.121 23:54:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.121 23:54:48 thread -- scripts/common.sh@366 -- # decimal 2 00:13:10.121 23:54:48 thread -- scripts/common.sh@353 -- # local d=2 00:13:10.121 23:54:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.121 23:54:48 thread -- scripts/common.sh@355 -- # echo 2 00:13:10.121 23:54:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.121 23:54:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.121 23:54:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.121 23:54:48 thread -- scripts/common.sh@368 -- # return 0 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:10.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.121 --rc genhtml_branch_coverage=1 00:13:10.121 --rc genhtml_function_coverage=1 00:13:10.121 --rc genhtml_legend=1 00:13:10.121 --rc geninfo_all_blocks=1 00:13:10.121 --rc geninfo_unexecuted_blocks=1 00:13:10.121 00:13:10.121 ' 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:10.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.121 --rc genhtml_branch_coverage=1 00:13:10.121 --rc genhtml_function_coverage=1 00:13:10.121 --rc genhtml_legend=1 00:13:10.121 --rc geninfo_all_blocks=1 00:13:10.121 --rc geninfo_unexecuted_blocks=1 00:13:10.121 00:13:10.121 ' 00:13:10.121 23:54:48 thread -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:10.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.121 --rc genhtml_branch_coverage=1 00:13:10.121 --rc genhtml_function_coverage=1 00:13:10.121 --rc genhtml_legend=1 00:13:10.121 --rc geninfo_all_blocks=1 00:13:10.121 --rc geninfo_unexecuted_blocks=1 00:13:10.121 00:13:10.121 ' 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:10.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.121 --rc genhtml_branch_coverage=1 00:13:10.121 --rc genhtml_function_coverage=1 00:13:10.121 --rc genhtml_legend=1 00:13:10.121 --rc geninfo_all_blocks=1 00:13:10.121 --rc geninfo_unexecuted_blocks=1 00:13:10.121 00:13:10.121 ' 00:13:10.121 23:54:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.121 23:54:48 thread -- common/autotest_common.sh@10 -- # set +x 00:13:10.121 ************************************ 00:13:10.121 START TEST thread_poller_perf 00:13:10.121 ************************************ 00:13:10.121 23:54:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:10.121 [2024-12-09 23:54:48.587753] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:10.122 [2024-12-09 23:54:48.587857] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477257 ] 00:13:10.382 [2024-12-09 23:54:48.687995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.382 [2024-12-09 23:54:48.753701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.382 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:13:11.763 [2024-12-09T22:54:50.283Z] ====================================== 00:13:11.763 [2024-12-09T22:54:50.283Z] busy:2707419570 (cyc) 00:13:11.763 [2024-12-09T22:54:50.283Z] total_run_count: 172000 00:13:11.763 [2024-12-09T22:54:50.283Z] tsc_hz: 2700000000 (cyc) 00:13:11.763 [2024-12-09T22:54:50.283Z] ====================================== 00:13:11.763 [2024-12-09T22:54:50.283Z] poller_cost: 15740 (cyc), 5829 (nsec) 00:13:11.763 00:13:11.763 real 0m1.278s 00:13:11.763 user 0m1.177s 00:13:11.763 sys 0m0.090s 00:13:11.763 23:54:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.763 23:54:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:11.763 ************************************ 00:13:11.763 END TEST thread_poller_perf 00:13:11.763 ************************************ 00:13:11.763 23:54:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:11.763 23:54:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:11.763 23:54:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.763 23:54:49 thread -- common/autotest_common.sh@10 -- # set +x 00:13:11.763 ************************************ 00:13:11.763 START TEST thread_poller_perf 00:13:11.763 ************************************ 00:13:11.763 23:54:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:11.763 [2024-12-09 23:54:49.927497] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:11.764 [2024-12-09 23:54:49.927641] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477515 ] 00:13:11.764 [2024-12-09 23:54:50.060335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.764 [2024-12-09 23:54:50.160669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.764 Running 1000 pollers for 1 seconds with 0 microseconds period. 
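The poller_cost figures in these result tables follow directly from the other counters: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Recomputed from the first run above as a consistency check (no new data, only the numbers already printed):

    busy_cyc=2707419570; runs=172000; tsc_hz=2700000000
    cyc=$(( busy_cyc / runs ))              # = 15740 cyc, matching the table
    nsec=$(( cyc * 1000000000 / tsc_hz ))   # = 5829 nsec at 2.7 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic applied to the second run below (busy 2705672295 over 2149000 runs) yields 1259 cyc and 466 nsec, showing how much cheaper a 0-microsecond-period poller is per invocation than the 1-microsecond case.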
00:13:13.144 [2024-12-09T22:54:51.664Z] ====================================== 00:13:13.144 [2024-12-09T22:54:51.664Z] busy:2705672295 (cyc) 00:13:13.144 [2024-12-09T22:54:51.664Z] total_run_count: 2149000 00:13:13.144 [2024-12-09T22:54:51.664Z] tsc_hz: 2700000000 (cyc) 00:13:13.144 [2024-12-09T22:54:51.664Z] ====================================== 00:13:13.144 [2024-12-09T22:54:51.664Z] poller_cost: 1259 (cyc), 466 (nsec) 00:13:13.144 00:13:13.144 real 0m1.351s 00:13:13.144 user 0m1.219s 00:13:13.144 sys 0m0.120s 00:13:13.144 23:54:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.144 23:54:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:13.144 ************************************ 00:13:13.144 END TEST thread_poller_perf 00:13:13.144 ************************************ 00:13:13.144 23:54:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:13.144 00:13:13.144 real 0m2.997s 00:13:13.144 user 0m2.626s 00:13:13.144 sys 0m0.363s 00:13:13.144 23:54:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.144 23:54:51 thread -- common/autotest_common.sh@10 -- # set +x 00:13:13.144 ************************************ 00:13:13.144 END TEST thread 00:13:13.144 ************************************ 00:13:13.144 23:54:51 -- spdk/autotest.sh@171 -- # [[ 1 -eq 1 ]] 00:13:13.144 23:54:51 -- spdk/autotest.sh@172 -- # run_test accel /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh 00:13:13.144 23:54:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:13.144 23:54:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.144 23:54:51 -- common/autotest_common.sh@10 -- # set +x 00:13:13.144 ************************************ 00:13:13.144 START TEST accel 00:13:13.144 ************************************ 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh 00:13:13.144 * Looking for test storage... 00:13:13.144 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1711 -- # lcov --version 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:13.144 23:54:51 accel -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.144 23:54:51 accel -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.144 23:54:51 accel -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.144 23:54:51 accel -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.144 23:54:51 accel -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.144 23:54:51 accel -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.144 23:54:51 accel -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.144 23:54:51 accel -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.144 23:54:51 accel -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.144 23:54:51 accel -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.144 23:54:51 accel -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.144 23:54:51 accel -- scripts/common.sh@344 -- # case "$op" in 00:13:13.144 23:54:51 accel -- scripts/common.sh@345 -- # : 1 00:13:13.144 23:54:51 accel -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.144 23:54:51 accel -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.144 23:54:51 accel -- scripts/common.sh@365 -- # decimal 1 00:13:13.144 23:54:51 accel -- scripts/common.sh@353 -- # local d=1 00:13:13.144 23:54:51 accel -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.144 23:54:51 accel -- scripts/common.sh@355 -- # echo 1 00:13:13.144 23:54:51 accel -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.144 23:54:51 accel -- scripts/common.sh@366 -- # decimal 2 00:13:13.144 23:54:51 accel -- scripts/common.sh@353 -- # local d=2 00:13:13.144 23:54:51 accel -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.144 23:54:51 accel -- scripts/common.sh@355 -- # echo 2 00:13:13.144 23:54:51 accel -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.144 23:54:51 accel -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.144 23:54:51 accel -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.144 23:54:51 accel -- scripts/common.sh@368 -- # return 0 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:13.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.144 --rc genhtml_branch_coverage=1 00:13:13.144 --rc genhtml_function_coverage=1 00:13:13.144 --rc genhtml_legend=1 00:13:13.144 --rc geninfo_all_blocks=1 00:13:13.144 --rc geninfo_unexecuted_blocks=1 00:13:13.144 00:13:13.144 ' 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:13.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.144 --rc genhtml_branch_coverage=1 00:13:13.144 --rc genhtml_function_coverage=1 00:13:13.144 --rc genhtml_legend=1 00:13:13.144 --rc geninfo_all_blocks=1 00:13:13.144 --rc geninfo_unexecuted_blocks=1 00:13:13.144 00:13:13.144 ' 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:13.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.144 --rc genhtml_branch_coverage=1 00:13:13.144 --rc genhtml_function_coverage=1 00:13:13.144 --rc genhtml_legend=1 00:13:13.144 --rc geninfo_all_blocks=1 00:13:13.144 --rc geninfo_unexecuted_blocks=1 00:13:13.144 00:13:13.144 ' 00:13:13.144 23:54:51 accel -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:13.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.144 --rc genhtml_branch_coverage=1 00:13:13.144 --rc genhtml_function_coverage=1 00:13:13.144 --rc genhtml_legend=1 00:13:13.144 --rc geninfo_all_blocks=1 00:13:13.144 --rc geninfo_unexecuted_blocks=1 00:13:13.144 00:13:13.144 ' 00:13:13.144 23:54:51 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:13:13.144 23:54:51 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:13:13.144 23:54:51 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:13.145 23:54:51 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=477732 00:13:13.145 23:54:51 accel -- accel/accel.sh@63 -- # waitforlisten 477732 00:13:13.145 23:54:51 accel -- common/autotest_common.sh@835 -- # '[' -z 477732 ']' 00:13:13.145 23:54:51 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:13.145 23:54:51 accel -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.145 23:54:51 accel -- accel/accel.sh@61 -- # build_accel_config 00:13:13.145 23:54:51 accel -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.145 23:54:51 accel 
-- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.145 23:54:51 accel -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.145 23:54:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.145 23:54:51 accel -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.145 23:54:51 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.145 23:54:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.145 23:54:51 accel -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:13.145 23:54:51 accel -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:13.145 23:54:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.145 23:54:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:13.145 23:54:51 accel -- accel/accel.sh@41 -- # jq -r . 00:13:13.145 [2024-12-09 23:54:51.576824] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:13.145 [2024-12-09 23:54:51.576921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477732 ] 00:13:13.404 [2024-12-09 23:54:51.672880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.404 [2024-12-09 23:54:51.746577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.404 [2024-12-09 23:54:51.751100] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:15.336 [2024-12-09 23:54:53.325085] 'OCF_Core' volume operations registered 00:13:15.336 [2024-12-09 23:54:53.325121] 'OCF_Cache' volume operations registered 00:13:15.336 [2024-12-09 23:54:53.330346] 'OCF Composite' volume operations registered 00:13:15.336 [2024-12-09 23:54:53.336036] 'SPDK_block_device' volume operations registered 00:13:15.336 23:54:53 accel -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.336 23:54:53 accel -- common/autotest_common.sh@868 -- # return 0 00:13:15.336 23:54:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:13:15.336 23:54:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:13:15.336 23:54:53 accel -- accel/accel.sh@67 -- # [[ 1 -gt 0 ]] 00:13:15.336 23:54:53 accel -- accel/accel.sh@67 -- # check_save_config ioat_scan_accel_module 00:13:15.336 23:54:53 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:13:15.336 23:54:53 accel -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.336 23:54:53 accel -- common/autotest_common.sh@10 -- # set +x 00:13:15.336 23:54:53 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:13:15.336 23:54:53 accel -- accel/accel.sh@56 -- # grep ioat_scan_accel_module 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.596 "method": "ioat_scan_accel_module" 00:13:15.596 23:54:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:13:15.596 23:54:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:15.596 23:54:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:13:15.596 23:54:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@10 -- # set +x 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=ioat 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=ioat 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 
accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # IFS== 00:13:15.596 23:54:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:15.596 23:54:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:15.596 23:54:53 accel -- accel/accel.sh@75 -- # killprocess 477732 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@954 -- # '[' -z 477732 ']' 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@958 -- # kill -0 477732 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@959 -- # uname 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.596 23:54:53 accel -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 477732 00:13:15.596 23:54:54 accel -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.596 23:54:54 accel -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.596 23:54:54 accel -- common/autotest_common.sh@972 -- # echo 'killing process with pid 477732' 00:13:15.596 killing process with pid 477732 00:13:15.596 23:54:54 accel -- common/autotest_common.sh@973 -- # kill 477732 00:13:15.596 23:54:54 accel -- common/autotest_common.sh@978 -- # wait 477732 00:13:16.973 23:54:55 accel -- accel/accel.sh@76 -- # trap - ERR 00:13:16.973 23:54:55 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:13:16.973 23:54:55 accel -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:16.973 23:54:55 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.973 23:54:55 accel -- common/autotest_common.sh@10 -- # set +x 00:13:16.973 23:54:55 accel.accel_help -- common/autotest_common.sh@1129 -- # accel_perf -h 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:16.973 23:54:55 accel.accel_help -- 
accel/accel.sh@12 -- # build_accel_config 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:13:16.973 23:54:55 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:13:16.973 23:54:55 accel.accel_help -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.973 23:54:55 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:13:16.973 23:54:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:16.973 23:54:55 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:16.973 23:54:55 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.973 23:54:55 accel -- common/autotest_common.sh@10 -- # set +x 00:13:16.973 ************************************ 00:13:16.973 START TEST accel_missing_filename 00:13:16.973 ************************************ 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w compress 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # local es=0 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # type -t accel_perf 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.973 23:54:55 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w compress 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:13:16.973 23:54:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:13:16.973 [2024-12-09 23:54:55.234688] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:13:16.973 [2024-12-09 23:54:55.234738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478186 ] 00:13:16.973 [2024-12-09 23:54:55.342872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.973 [2024-12-09 23:54:55.456028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.973 [2024-12-09 23:54:55.460894] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:17.546 [2024-12-09 23:54:56.040668] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:17.805 [2024-12-09 23:54:56.149090] accel_perf.c:1546:main: *ERROR*: ERROR starting application 00:13:17.805 A filename is required. 00:13:17.805 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # es=234 00:13:17.805 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.805 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@664 -- # es=106 00:13:17.806 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@665 -- # case "$es" in 00:13:17.806 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@672 -- # es=1 00:13:17.806 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.806 00:13:17.806 real 0m1.038s 00:13:17.806 user 0m0.604s 00:13:17.806 sys 0m0.268s 00:13:17.806 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.806 23:54:56 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:13:17.806 ************************************ 00:13:17.806 END TEST accel_missing_filename 00:13:17.806 ************************************ 00:13:17.806 23:54:56 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:17.806 23:54:56 accel -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:13:17.806 23:54:56 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.806 23:54:56 accel -- common/autotest_common.sh@10 -- # set +x 00:13:17.806 ************************************ 00:13:17.806 START TEST accel_compress_verify 00:13:17.806 ************************************ 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # local es=0 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # type -t accel_perf 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.806 23:54:56 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w compress -l 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:17.806 23:54:56 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:13:18.065 [2024-12-09 23:54:56.336194] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:18.065 [2024-12-09 23:54:56.336335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478350 ] 00:13:18.065 [2024-12-09 23:54:56.472028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.065 [2024-12-09 23:54:56.579184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.065 [2024-12-09 23:54:56.583640] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:19.000 [2024-12-09 23:54:57.196614] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:19.001 [2024-12-09 23:54:57.305953] accel_perf.c:1546:main: *ERROR*: ERROR starting application 00:13:19.001 00:13:19.001 Compression does not support the verify option, aborting. 
00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # es=161 00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@664 -- # es=33 00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@665 -- # case "$es" in 00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@672 -- # es=1 00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.001 00:13:19.001 real 0m1.091s 00:13:19.001 user 0m0.642s 00:13:19.001 sys 0m0.308s 00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.001 23:54:57 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:13:19.001 ************************************ 00:13:19.001 END TEST accel_compress_verify 00:13:19.001 ************************************ 00:13:19.001 23:54:57 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:19.001 23:54:57 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:19.001 23:54:57 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.001 23:54:57 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.001 ************************************ 00:13:19.001 START TEST accel_wrong_workload 00:13:19.001 ************************************ 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w foobar 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # local es=0 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # type -t accel_perf 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w foobar 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:13:19.001 23:54:57 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
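The es traces in the two NOT tests above come from the exit-status handling in common/autotest_common.sh: the wrapper runs a command that is expected to fail, folds a status above 128 back down (a process killed by signal N exits with 128+N, so 234 becomes 106 and 161 becomes 33), maps those codes to a plain failure, and returns success only when the command did not succeed. A minimal sketch of that logic, reconstructed from the traces rather than copied from SPDK's source:

NOT() {
    local es=0
    "$@" || es=$?                      # run the command, capture any failure
    ((es > 128)) && es=$((es - 128))   # 128+N means the process died on signal N
    case "$es" in
        106 | 33) es=1 ;;              # codes observed above, folded to a plain failure
    esac
    ! ((es == 0))                      # NOT succeeds only when the command failed
}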
00:13:19.001 Unsupported workload type: foobar 00:13:19.001 [2024-12-09 23:54:57.474067] app.c:1466:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:19.001 accel_perf options: 00:13:19.001 [-h help message] 00:13:19.001 [-q queue depth per core] 00:13:19.001 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:19.001 [-T number of threads per core 00:13:19.001 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:19.001 [-t time in seconds] 00:13:19.001 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:19.001 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy, dix_generate, dix_verify 00:13:19.001 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:19.001 [-l for compress/decompress workloads, name of uncompressed input file 00:13:19.001 [-S for crc32c workload, use this seed value (default 0) 00:13:19.001 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:19.001 [-f for fill workload, use this BYTE value (default 255) 00:13:19.001 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:19.001 [-y verify result if this switch is on] 00:13:19.001 [-a tasks to allocate per core (default: same value as -q)] 00:13:19.001 Can be used to spread operations across a wider range of memory. 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@655 -- # es=1 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.001 00:13:19.001 real 0m0.024s 00:13:19.001 user 0m0.013s 00:13:19.001 sys 0m0.010s 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.001 23:54:57 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:13:19.001 ************************************ 00:13:19.001 END TEST accel_wrong_workload 00:13:19.001 ************************************ 00:13:19.001 23:54:57 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:19.001 23:54:57 accel -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:13:19.001 23:54:57 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.001 Error: writing output failed: Broken pipe 00:13:19.001 23:54:57 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.260 ************************************ 00:13:19.260 START TEST accel_negative_buffers 00:13:19.260 ************************************ 00:13:19.260 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@1129 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:19.260 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # local es=0 00:13:19.260 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@654 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:19.260 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # local arg=accel_perf 00:13:19.260 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.260 23:54:57 
accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # type -t accel_perf 00:13:19.260 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.260 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@655 -- # accel_perf -t 1 -w xor -y -x -1 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:13:19.260 23:54:57 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:13:19.260 -x option must be non-negative. 00:13:19.261 [2024-12-09 23:54:57.566046] app.c:1466:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:19.261 accel_perf options: 00:13:19.261 [-h help message] 00:13:19.261 [-q queue depth per core] 00:13:19.261 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:19.261 [-T number of threads per core 00:13:19.261 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:19.261 [-t time in seconds] 00:13:19.261 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:19.261 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy, dix_generate, dix_verify 00:13:19.261 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:19.261 [-l for compress/decompress workloads, name of uncompressed input file 00:13:19.261 [-S for crc32c workload, use this seed value (default 0) 00:13:19.261 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:19.261 [-f for fill workload, use this BYTE value (default 255) 00:13:19.261 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:19.261 [-y verify result if this switch is on] 00:13:19.261 [-a tasks to allocate per core (default: same value as -q)] 00:13:19.261 Can be used to spread operations across a wider range of memory. 
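Both rejected runs above fail accel_perf's argument validation before any I/O is issued. For contrast, invocations that satisfy the usage text printed above, built only from flags it documents (illustrative; the binary path is the one used throughout this log):

PERF=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf
$PERF -t 1 -w xor -y -x 2        # xor needs a non-negative source-buffer count, minimum 2
$PERF -t 1 -w crc32c -S 32 -y    # a workload name from the supported list, with a seed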
00:13:19.261 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@655 -- # es=1 00:13:19.261 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.261 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.261 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.261 00:13:19.261 real 0m0.038s 00:13:19.261 user 0m0.024s 00:13:19.261 sys 0m0.014s 00:13:19.261 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.261 23:54:57 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:13:19.261 ************************************ 00:13:19.261 END TEST accel_negative_buffers 00:13:19.261 ************************************ 00:13:19.261 Error: writing output failed: Broken pipe 00:13:19.261 23:54:57 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:19.261 23:54:57 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:13:19.261 23:54:57 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.261 23:54:57 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.261 ************************************ 00:13:19.261 START TEST accel_crc32c 00:13:19.261 ************************************ 00:13:19.261 23:54:57 accel.accel_crc32c -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:19.261 23:54:57 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:19.261 [2024-12-09 23:54:57.653326] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:13:19.261 [2024-12-09 23:54:57.653462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478549 ] 00:13:19.520 [2024-12-09 23:54:57.787441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.520 [2024-12-09 23:54:57.873861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.520 [2024-12-09 23:54:57.878355] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 
accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.102 23:54:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:21.491 23:54:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:21.491 00:13:21.491 real 0m2.028s 00:13:21.491 user 0m0.010s 00:13:21.491 sys 0m0.002s 00:13:21.491 23:54:59 accel.accel_crc32c -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.491 23:54:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:21.491 ************************************ 00:13:21.491 END TEST accel_crc32c 00:13:21.491 ************************************ 00:13:21.491 23:54:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:13:21.491 23:54:59 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:13:21.491 23:54:59 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.491 23:54:59 accel -- common/autotest_common.sh@10 -- # set +x 00:13:21.491 ************************************ 00:13:21.491 START TEST accel_crc32c_C2 00:13:21.491 ************************************ 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w crc32c -y -C 2 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:21.491 23:54:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:21.491 23:54:59 
accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:21.491 [2024-12-09 23:54:59.714684] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:21.491 [2024-12-09 23:54:59.714738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478841 ] 00:13:21.491 [2024-12-09 23:54:59.833031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.491 [2024-12-09 23:54:59.929570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.491 [2024-12-09 23:54:59.934248] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val= 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.057 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.058 23:55:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:23.429 00:13:23.429 real 0m2.014s 00:13:23.429 user 0m1.571s 00:13:23.429 sys 0m0.246s 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.429 23:55:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:23.429 ************************************ 00:13:23.429 END TEST accel_crc32c_C2 00:13:23.429 ************************************ 00:13:23.429 23:55:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:13:23.430 23:55:01 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:23.430 23:55:01 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.430 23:55:01 accel -- common/autotest_common.sh@10 -- # set +x 00:13:23.430 ************************************ 00:13:23.430 START TEST accel_copy 00:13:23.430 ************************************ 00:13:23.430 23:55:01 accel.accel_copy -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w copy -y 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:23.430 23:55:01 accel.accel_copy -- 
accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:23.430 23:55:01 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:13:23.430 [2024-12-09 23:55:01.817371] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:23.430 [2024-12-09 23:55:01.817510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479119 ] 00:13:23.430 [2024-12-09 23:55:01.947914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.281 [2024-12-09 23:55:02.037595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.281 [2024-12-09 23:55:02.042316] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 
-- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val=ioat 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=ioat 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.281 23:55:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n ioat ]] 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:13:25.653 23:55:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ ioat == \i\o\a\t ]] 00:13:25.653 00:13:25.653 real 0m2.090s 00:13:25.653 user 0m1.587s 00:13:25.653 sys 0m0.306s 00:13:25.653 23:55:03 accel.accel_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.653 23:55:03 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:13:25.653 ************************************ 00:13:25.653 END TEST accel_copy 00:13:25.653 ************************************ 00:13:25.653 23:55:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:25.653 23:55:03 accel -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:25.653 23:55:03 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.653 23:55:03 accel -- common/autotest_common.sh@10 -- # set +x 00:13:25.653 ************************************ 00:13:25.653 START TEST accel_fill 00:13:25.653 ************************************ 00:13:25.653 23:55:03 accel.accel_fill -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:13:25.653 23:55:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
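The long runs of IFS=: / read -r var val / case "$var" in lines above are xtrace echoes of the option-reading loop in accel.sh (markers @19-@23): each test's parameters arrive as var:val pairs, and the case statement records the opcode (@23) and the module (@22) that the checks at @27 later compare, software for the crc32c runs and ioat for copy. A rough sketch of that loop, reconstructed from the trace markers with the spec format assumed, not quoted from accel.sh:

while IFS=: read -r var val; do
    case "$var" in
        opc*) accel_opc=$val ;;        # e.g. crc32c, copy, fill (format assumed)
        module*) accel_module=$val ;;  # software or ioat, as echoed above
    esac
done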
00:13:25.653 [2024-12-09 23:55:03.935591] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:25.654 [2024-12-09 23:55:03.935644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479388 ]
00:13:25.654 [2024-12-09 23:55:04.049207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:25.654 [2024-12-09 23:55:04.159875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:25.654 [2024-12-09 23:55:04.164783] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:26.587 23:55:04 accel.accel_fill -- accel/accel.sh@20 -- # config values read (case/IFS/read repeats elided): 0x1, fill (accel_opc), 0x80, '4096 bytes', ioat (accel_module), 64, 64, 1, '1 seconds', Yes
00:13:27.520 23:55:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n ioat ]]
00:13:27.520 23:55:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:13:27.520 23:55:05 accel.accel_fill -- accel/accel.sh@27 -- # [[ ioat == \i\o\a\t ]]
00:13:27.520 real 0m2.040s
00:13:27.520 user 0m0.009s
00:13:27.520 sys 0m0.003s
00:13:27.520 23:55:05 accel.accel_fill -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:27.520 23:55:05 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:13:27.520 ************************************
00:13:27.520 END TEST accel_fill
00:13:27.520 ************************************
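For reference, everything above reduces to one accel_perf invocation plus a JSON module config fed over /dev/fd/62. A minimal sketch of reproducing the fill run by hand, assuming the SPDK build tree at the workspace path shown in this log; the "subsystems" wrapper around the ioat_scan_accel_module entry is an assumption about the config file layout, not something this trace shows:

  # Hypothetical standalone rerun of the accel_fill case (flags copied from the trace).
  ACCEL_PERF=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf
  cfg='{"subsystems":[{"subsystem":"accel","config":[{"method":"ioat_scan_accel_module"}]}]}'
  sudo "$ACCEL_PERF" -c <(printf '%s\n' "$cfg") -t 1 -w fill -y   # -t seconds, -w workload, -y verify (the 'Yes' in the trace)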
00:13:27.520 23:55:05 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:13:27.520 23:55:05 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:27.520 23:55:05 accel -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:27.520 23:55:05 accel -- common/autotest_common.sh@10 -- # set +x
00:13:27.520 ************************************
00:13:27.520 START TEST accel_copy_crc32c
00:13:27.520 ************************************
00:13:27.520 23:55:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w copy_crc32c -y
00:13:27.520 23:55:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:13:27.520 23:55:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:13:27.520 23:55:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:13:27.520 23:55:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=('{"method": "ioat_scan_accel_module"}'); jq -r .)
00:13:27.520 [2024-12-09 23:55:06.038469] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:27.520 [2024-12-09 23:55:06.038528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479588 ]
00:13:27.778 [2024-12-09 23:55:06.160335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:27.778 [2024-12-09 23:55:06.249478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:27.778 [2024-12-09 23:55:06.254127] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:28.712 23:55:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # config values read (case/IFS/read repeats elided): 0x1, copy_crc32c (accel_opc), 0, '4096 bytes', '4096 bytes', software (accel_module), 32, 32, 1, '1 seconds', Yes
00:13:29.646 23:55:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:29.646 23:55:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:13:29.646 23:55:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:29.646 real 0m2.070s
00:13:29.646 user 0m0.011s
00:13:29.646 sys 0m0.001s
00:13:29.646 23:55:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:29.646 23:55:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:13:29.646 ************************************
00:13:29.646 END TEST accel_copy_crc32c
00:13:29.646 ************************************
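The banners and the real/user/sys triplets around each test come from the run_test helper in common/autotest_common.sh (the @1129/@1130 frames above), which times the test body with the shell's time builtin. A simplified stand-in showing only the behavior visible in this log, not the real helper:

  run_test() {   # sketch only; the real function does more bookkeeping
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"        # produces the real/user/sys triplet seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2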
00:13:29.646 23:55:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:13:29.646 23:55:08 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:13:29.646 23:55:08 accel -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:29.646 23:55:08 accel -- common/autotest_common.sh@10 -- # set +x
00:13:29.646 ************************************
00:13:29.646 START TEST accel_copy_crc32c_C2
00:13:29.646 ************************************
00:13:29.646 23:55:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:13:29.646 23:55:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:13:29.646 23:55:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:13:29.646 23:55:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:13:29.646 23:55:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=('{"method": "ioat_scan_accel_module"}'); jq -r .)
00:13:29.646 [2024-12-09 23:55:08.144544] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:29.646 [2024-12-09 23:55:08.144598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479862 ]
00:13:29.905 [2024-12-09 23:55:08.255414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:29.905 [2024-12-09 23:55:08.361519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:29.905 [2024-12-09 23:55:08.366133] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:30.471 23:55:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # config values read (case/IFS/read repeats elided): 0x1, copy_crc32c (accel_opc), 0, '4096 bytes', '8192 bytes', software (accel_module), 32, 32, 1, '1 seconds', Yes
00:13:31.843 23:55:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:31.844 23:55:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:13:31.844 23:55:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:31.844 real 0m2.058s
00:13:31.844 user 0m0.011s
00:13:31.844 sys 0m0.002s
00:13:31.844 23:55:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:31.844 23:55:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:13:31.844 ************************************
00:13:31.844 END TEST accel_copy_crc32c_C2
00:13:31.844 ************************************
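Relative to the plain copy_crc32c run, the only changes here are the -C 2 flag and the second buffer in the trace growing from '4096 bytes' to '8192 bytes'. A hedged reading: -C appears to set the source/chained buffer count for the crc32c workloads (hence the _C2 test name), but confirm against accel_perf's usage text rather than this log:

  # Same invocation plus the buffer-count flag; ACCEL_PERF and cfg
  # as defined in the earlier fill sketch (both hypothetical names).
  "$ACCEL_PERF" -c <(printf '%s\n' "$cfg") -t 1 -w copy_crc32c -y -C 2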
00:13:31.844 23:55:10 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:13:31.844 23:55:10 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:31.844 23:55:10 accel -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:31.844 23:55:10 accel -- common/autotest_common.sh@10 -- # set +x
00:13:31.844 ************************************
00:13:31.844 START TEST accel_dualcast
00:13:31.844 ************************************
00:13:31.844 23:55:10 accel.accel_dualcast -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w dualcast -y
00:13:31.844 23:55:10 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:13:31.844 23:55:10 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:13:31.844 23:55:10 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:13:31.844 23:55:10 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=('{"method": "ioat_scan_accel_module"}'); jq -r .)
00:13:31.844 [2024-12-09 23:55:10.253103] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:31.844 [2024-12-09 23:55:10.253186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480153 ]
00:13:32.101 [2024-12-09 23:55:10.362632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:32.102 [2024-12-09 23:55:10.463125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:32.102 [2024-12-09 23:55:10.468003] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:32.667 23:55:11 accel.accel_dualcast -- accel/accel.sh@20 -- # config values read (case/IFS/read repeats elided): 0x1, dualcast (accel_opc), '4096 bytes', software (accel_module), 32, 32, 1, '1 seconds', Yes
00:13:34.041 23:55:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:34.041 23:55:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:13:34.041 23:55:12 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:34.041 real 0m2.032s
00:13:34.041 user 0m1.578s
00:13:34.041 sys 0m0.267s
00:13:34.041 23:55:12 accel.accel_dualcast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:34.041 23:55:12 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:13:34.041 ************************************
00:13:34.041 END TEST accel_dualcast
00:13:34.041 ************************************
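Each test closes with three @27 assertions, printed by xtrace with their values already expanded, which is why the last one shows up as the odd-looking [[ software == \s\o\f\t\w\a\r\e ]] (bash escapes the quoted right-hand side of == so the expanded string is matched literally, not as a glob). Unexpanded they amount to roughly the following; the variable names are taken from the local declarations in the trace, and the exact expressions in accel.sh may differ:

  [[ -n $accel_module ]]                      # a module name was captured from the run
  [[ -n $accel_opc ]]                         # the opcode under test was captured
  [[ $accel_module == "$expected_module" ]]   # software here; ioat for the accel_fill case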
00:13:34.041 23:55:12 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:13:34.041 23:55:12 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:34.041 23:55:12 accel -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:34.041 23:55:12 accel -- common/autotest_common.sh@10 -- # set +x
00:13:34.041 ************************************
00:13:34.041 START TEST accel_compare
00:13:34.041 ************************************
00:13:34.041 23:55:12 accel.accel_compare -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w compare -y
00:13:34.041 23:55:12 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:13:34.041 23:55:12 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:13:34.041 23:55:12 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:13:34.041 23:55:12 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=('{"method": "ioat_scan_accel_module"}'); jq -r .)
00:13:34.041 [2024-12-09 23:55:12.355577] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:34.041 [2024-12-09 23:55:12.355643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480439 ]
00:13:34.041 [2024-12-09 23:55:12.456759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.041 [2024-12-09 23:55:12.556028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.300 [2024-12-09 23:55:12.560980] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:34.866 23:55:13 accel.accel_compare -- accel/accel.sh@20 -- # config values read (case/IFS/read repeats elided): 0x1, compare (accel_opc), '4096 bytes', software (accel_module), 32, 32, 1, '1 seconds', Yes
00:13:36.238 23:55:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:36.238 23:55:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:13:36.238 23:55:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:36.238 real 0m2.068s
00:13:36.238 user 0m1.576s
00:13:36.238 sys 0m0.301s
00:13:36.238 23:55:14 accel.accel_compare -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:36.238 23:55:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:13:36.238 ************************************
00:13:36.238 END TEST accel_compare
00:13:36.238 ************************************
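By this point the pattern is fixed: every @105-@110 step is the same run_test/accel_test pair with only the workload name and trailing flags changing. An editorial sketch of that table, not how accel.sh is actually written (the real script spells the calls out one by one and names the variants accel_copy_crc32c_C2 and accel_xor explicitly):

  # workload plus extra flags, straight from the run_test lines in this log
  for spec in "fill" "copy_crc32c" "copy_crc32c -C 2" "dualcast" "compare" "xor" "xor -x 3"; do
      set -- $spec          # deliberate word splitting: $1=workload, rest=flags
      w=$1; shift
      run_test "accel_${w}" accel_test -t 1 -w "$w" -y "$@"
  done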
00:13:36.238 23:55:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:13:36.238 23:55:14 accel -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:36.238 23:55:14 accel -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:36.238 23:55:14 accel -- common/autotest_common.sh@10 -- # set +x
00:13:36.238 ************************************
00:13:36.238 START TEST accel_xor
00:13:36.238 ************************************
00:13:36.238 23:55:14 accel.accel_xor -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w xor -y
00:13:36.238 23:55:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:13:36.238 23:55:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:13:36.238 23:55:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:13:36.238 23:55:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=('{"method": "ioat_scan_accel_module"}'); jq -r .)
00:13:36.238 [2024-12-09 23:55:14.474289] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:36.238 [2024-12-09 23:55:14.474431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480716 ]
00:13:36.238 [2024-12-09 23:55:14.611421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:36.238 [2024-12-09 23:55:14.716394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:36.238 [2024-12-09 23:55:14.721094] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:37.175 23:55:15 accel.accel_xor -- accel/accel.sh@20 -- # config values read (case/IFS/read repeats elided): 0x1, xor (accel_opc), 2, '4096 bytes', software (accel_module), 32, 32, 1, '1 seconds', Yes
00:13:38.110 23:55:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:38.110 23:55:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:13:38.110 23:55:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:38.110 real 0m2.151s
00:13:38.110 user 0m1.644s
00:13:38.110 sys 0m0.313s
00:13:38.110 23:55:16 accel.accel_xor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:38.110 23:55:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:13:38.110 ************************************
00:13:38.110 END TEST accel_xor
00:13:38.110 ************************************
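The follow-up xor run is identical except for -x 3, and the traces make the difference visible: the first xor run read val=2 (its source count), the next one reads val=3. Reusing the hypothetical ACCEL_PERF and cfg variables from the fill sketch:

  # Three xor source buffers instead of two (val=2 vs val=3 in the traces)
  "$ACCEL_PERF" -c <(printf '%s\n' "$cfg") -t 1 -w xor -y -x 3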
00:13:38.369 [2024-12-09 23:55:16.678261] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:38.369 [2024-12-09 23:55:16.678398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480926 ]
00:13:38.369 [2024-12-09 23:55:16.803089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:38.627 [2024-12-09 23:55:16.903843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:38.627 [2024-12-09 23:55:16.908718] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:39.194 23:55:17 accel.accel_xor -- accel/accel.sh@20 -- # val: 0x1, xor, 3, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes (accel_opc=xor, accel_module=software)
00:13:40.568 23:55:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:40.568 23:55:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:13:40.568 23:55:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:40.568 real 0m2.122s
00:13:40.568 user 0m0.010s
00:13:40.568 sys 0m0.002s
00:13:40.568 23:55:18 accel.accel_xor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:40.568 23:55:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:13:40.568 ************************************
00:13:40.568 END TEST accel_xor
00:13:40.568 ************************************
00:13:40.568 23:55:18 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:13:40.568 ************************************
00:13:40.569 START TEST accel_dif_verify
00:13:40.569 ************************************
00:13:40.569 23:55:18 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:13:40.569 23:55:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:13:40.569 23:55:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config: accel_json_cfg=('{"method": "ioat_scan_accel_module"}'), jq -r .
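For this second xor pass the parsed values above (xor, 3, '4096 bytes') say three 4096-byte source buffers are XORed into one destination. A toy illustration of that byte-wise operation, with three short arrays standing in for real buffers:

a=(1 2 3); b=(4 5 6); c=(7 8 9); out=()   # three toy source "buffers"
for i in "${!a[@]}"; do
    out[i]=$(( a[i] ^ b[i] ^ c[i] ))      # dest[i] = src0[i] ^ src1[i] ^ src2[i]
done
echo "${out[@]}"                          # prints: 2 15 12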
00:13:40.569 [2024-12-09 23:55:18.856052] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:40.569 [2024-12-09 23:55:18.856121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481168 ]
00:13:40.569 [2024-12-09 23:55:18.952624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:40.569 [2024-12-09 23:55:19.035171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:40.569 [2024-12-09 23:55:19.039678] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:41.503 23:55:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val: 0x1, dif_verify, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No (accel_opc=dif_verify, accel_module=software)
00:13:42.440 23:55:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:42.441 23:55:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:13:42.441 23:55:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:42.441 real 0m2.077s
00:13:42.441 user 0m0.013s
00:13:42.441 sys 0m0.000s
00:13:42.441 23:55:20 accel.accel_dif_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:42.441 23:55:20 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:13:42.441 ************************************
00:13:42.441 END TEST accel_dif_verify
00:13:42.441 ************************************
00:13:42.441 23:55:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:13:42.441 ************************************
00:13:42.441 START TEST accel_dif_generate
00:13:42.441 ************************************
00:13:42.441 23:55:20 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:13:42.441 23:55:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:13:42.441 23:55:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config: accel_json_cfg=('{"method": "ioat_scan_accel_module"}'), jq -r .
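The dif_verify values above ('4096 bytes' transfer, '512 bytes' block, '8 bytes' metadata) match the usual DIF layout, where every 512-byte block carries an 8-byte protection tuple; the tuple's guard/app/ref field split is NVMe background knowledge, not something this log prints. The size arithmetic:

xfer=4096; blk=512; md=8                   # sizes parsed from the run above
blocks=$(( xfer / blk ))
echo "$blocks protected blocks, $(( blocks * md )) metadata bytes, $(( xfer + blocks * md )) bytes total interleaved"
# prints: 8 protected blocks, 64 metadata bytes, 4160 bytes total interleaved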
00:13:42.699 [2024-12-09 23:55:20.969029] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:42.699 [2024-12-09 23:55:20.969104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481444 ]
00:13:42.699 [2024-12-09 23:55:21.077932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:42.699 [2024-12-09 23:55:21.178574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.699 [2024-12-09 23:55:21.183228] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:43.266 23:55:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val: 0x1, dif_generate, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No (accel_opc=dif_generate, accel_module=software)
00:13:44.900 23:55:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:44.900 23:55:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:13:44.900 23:55:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:44.900 real 0m2.046s
00:13:44.900 user 0m0.012s
00:13:44.900 sys 0m0.001s
00:13:44.900 23:55:22 accel.accel_dif_generate -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:44.900 23:55:22 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:13:44.900 ************************************
00:13:44.900 END TEST accel_dif_generate
00:13:44.900 ************************************
00:13:44.900 23:55:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:13:44.900 ************************************
00:13:44.900 START TEST accel_dif_generate_copy
00:13:44.900 ************************************
00:13:44.900 23:55:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:13:44.900 23:55:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:13:44.900 23:55:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config: accel_json_cfg=('{"method": "ioat_scan_accel_module"}'), jq -r .
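Every test in this section is wrapped in the same START/END banners with a real/user/sys triple in between, which suggests run_test is roughly a timed wrapper. This is a sketch inferred from those banners, not the actual common/autotest_common.sh implementation:

run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                 # emits the real/user/sys lines seen after each test
    local rc=$?
    echo "END TEST $name"
    return $rc
}
# invocation form taken verbatim from the log:
# run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy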
00:13:44.900 [2024-12-09 23:55:23.055804] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:44.900 [2024-12-09 23:55:23.055876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481724 ]
00:13:44.900 [2024-12-09 23:55:23.162565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:44.900 [2024-12-09 23:55:23.256508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:44.900 [2024-12-09 23:55:23.261206] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:45.467 23:55:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val: 0x1, dif_generate_copy, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', No (accel_opc=dif_generate_copy, accel_module=software)
00:13:46.840 23:55:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:46.840 23:55:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:13:46.840 23:55:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:46.840 real 0m2.054s
00:13:46.840 user 0m0.011s
00:13:46.840 sys 0m0.002s
00:13:46.840 23:55:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:46.840 23:55:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:13:46.840 ************************************
00:13:46.840 END TEST accel_dif_generate_copy
00:13:46.840 ************************************
00:13:46.840 23:55:25 accel -- accel/accel.sh@114 -- # run_test accel_dix_verify accel_test -t 1 -w dix_verify
00:13:46.840 ************************************
00:13:46.840 START TEST accel_dix_verify
00:13:46.840 ************************************
00:13:46.840 23:55:25 accel.accel_dix_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dix_verify
00:13:46.840 23:55:25 accel.accel_dix_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dix_verify
00:13:46.840 23:55:25 accel.accel_dix_verify -- accel/accel.sh@12 -- # build_accel_config: accel_json_cfg=('{"method": "ioat_scan_accel_module"}'), jq -r .
00:13:46.840 [2024-12-09 23:55:25.153702] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization...
00:13:46.840 [2024-12-09 23:55:25.153777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482009 ]
00:13:46.840 [2024-12-09 23:55:25.264628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:47.098 [2024-12-09 23:55:25.360632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:47.098 [2024-12-09 23:55:25.365086] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT
00:13:47.665 23:55:25 accel.accel_dix_verify -- accel/accel.sh@20 -- # val: 0x1, dix_verify, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No (accel_opc=dix_verify, accel_module=software)
00:13:49.040 23:55:27 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:13:49.040 23:55:27 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ -n dix_verify ]]
00:13:49.040 23:55:27 accel.accel_dix_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:13:49.040 real 0m2.032s
00:13:49.040 user 0m0.013s
00:13:49.040 sys 0m0.000s
00:13:49.040 23:55:27 accel.accel_dix_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:49.040 23:55:27 accel.accel_dix_verify -- common/autotest_common.sh@10 -- # set +x
00:13:49.040 ************************************
00:13:49.040 END TEST accel_dix_verify
00:13:49.040 ************************************
00:13:49.040 23:55:27 accel -- accel/accel.sh@115 -- # run_test accel_dix_generate accel_test -t 1 -w dif_generate
00:13:49.040 ************************************
00:13:49.040 START TEST accel_dix_generate
00:13:49.040 ************************************
00:13:49.040 23:55:27 accel.accel_dix_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:13:49.040 23:55:27 accel.accel_dix_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:13:49.040 23:55:27 accel.accel_dix_generate -- accel/accel.sh@12 -- # build_accel_config: accel_json_cfg=('{"method": "ioat_scan_accel_module"}'), jq -r .
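dix_verify parses the same 4096/4096/512/8 sizes as dif_verify, and the accel_dix_generate test above even reuses -w dif_generate, as the run_test line shows. The conventional DIX-vs-DIF distinction (background knowledge, not stated anywhere in this log) is where the 8-byte tuples live:

data=4096; blk=512; md=8
echo "DIF: one $(( data + data / blk * md ))-byte buffer, tuples interleaved after each block"
echo "DIX: a $data-byte data buffer plus a separate $(( data / blk * md ))-byte metadata buffer"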
00:13:49.040 [2024-12-09 23:55:27.275989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482287 ] 00:13:49.040 [2024-12-09 23:55:27.425841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.040 [2024-12-09 23:55:27.527890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.040 [2024-12-09 23:55:27.532524] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=0x1 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=dif_generate 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:13:49.973 23:55:28 
accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.973 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=software 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@22 -- # accel_module=software 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=32 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=32 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=1 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val=No 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:49.974 
23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:49.974 23:55:28 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@20 -- # val= 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # IFS=: 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:50.907 23:55:29 accel.accel_dix_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:50.907 00:13:50.907 real 0m2.126s 00:13:50.907 user 0m0.011s 00:13:50.907 sys 0m0.003s 00:13:50.907 23:55:29 accel.accel_dix_generate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.907 23:55:29 accel.accel_dix_generate -- common/autotest_common.sh@10 -- # set +x 00:13:50.907 ************************************ 00:13:50.907 END TEST accel_dix_generate 00:13:50.907 ************************************ 00:13:50.907 23:55:29 accel -- accel/accel.sh@117 -- # [[ y == y ]] 00:13:50.907 23:55:29 accel -- accel/accel.sh@118 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:50.907 23:55:29 accel -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:50.907 23:55:29 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.907 23:55:29 accel -- common/autotest_common.sh@10 -- # set +x 00:13:50.907 ************************************ 00:13:50.907 START TEST accel_comp 00:13:50.907 ************************************ 00:13:50.907 23:55:29 accel.accel_comp -- 
common/autotest_common.sh@1129 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:50.907 23:55:29 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:13:50.907 23:55:29 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:13:50.907 23:55:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:50.907 23:55:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:50.907 23:55:29 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:13:51.165 23:55:29 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:13:51.165 [2024-12-09 23:55:29.442117] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:51.165 [2024-12-09 23:55:29.442174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482483 ] 00:13:51.165 [2024-12-09 23:55:29.526011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.165 [2024-12-09 23:55:29.619529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.165 [2024-12-09 23:55:29.624182] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 
23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:51.732 23:55:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:53.105 23:55:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:53.105 00:13:53.105 real 0m2.055s 00:13:53.105 user 0m1.609s 00:13:53.106 sys 0m0.252s 00:13:53.106 23:55:31 accel.accel_comp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.106 23:55:31 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:13:53.106 ************************************ 00:13:53.106 END TEST accel_comp 00:13:53.106 ************************************ 00:13:53.106 23:55:31 accel -- accel/accel.sh@119 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:53.106 23:55:31 accel -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:13:53.106 23:55:31 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.106 23:55:31 accel -- common/autotest_common.sh@10 -- # set +x 00:13:53.106 ************************************ 00:13:53.106 START TEST accel_decomp 00:13:53.106 ************************************ 00:13:53.106 23:55:31 accel.accel_decomp -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:13:53.106 23:55:31 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:13:53.106 [2024-12-09 23:55:31.539344] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
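The EAL parameter line below carries -c 0x1, which is why exactly one "Reactor started on core 0" notice follows: the coremask is a hex bitmap of enabled cores. Illustrative bash for expanding such a mask (not part of the harness):

    # Expand an EAL coremask into core numbers: 0x1 -> core 0; 0xf -> cores 0-3.
    mask=0x1
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            echo "core $core enabled"
        fi
    done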
00:13:53.106 [2024-12-09 23:55:31.539397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482754 ] 00:13:53.364 [2024-12-09 23:55:31.636822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.364 [2024-12-09 23:55:31.744267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.364 [2024-12-09 23:55:31.748997] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 
accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:53.930 23:55:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # read 
-r var val 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:55.303 23:55:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:55.303 00:13:55.303 real 0m2.046s 00:13:55.303 user 0m1.605s 00:13:55.303 sys 0m0.249s 00:13:55.303 23:55:33 accel.accel_decomp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.303 23:55:33 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:13:55.303 ************************************ 00:13:55.303 END TEST accel_decomp 00:13:55.303 ************************************ 00:13:55.303 23:55:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:55.303 23:55:33 accel -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:55.303 23:55:33 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.303 23:55:33 accel -- common/autotest_common.sh@10 -- # set +x 00:13:55.303 ************************************ 00:13:55.303 START TEST accel_decomp_full 00:13:55.303 ************************************ 00:13:55.303 23:55:33 accel.accel_decomp_full -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 
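accel_decomp_full above is invoked with -y -o 0, and its config trace below shows a '111250 bytes' value where the fixed-size variants showed '4096 bytes'. The natural reading -- an inference from this log, not checked against accel.sh -- is that -o 0 sizes the operation to the whole bib input file. A hypothetical way to confirm that size in this workspace:

    # Inference check: the full-buffer value should match the bib file size.
    stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib   # expect 111250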
00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:13:55.303 23:55:33 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:13:55.303 [2024-12-09 23:55:33.640946] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:13:55.303 [2024-12-09 23:55:33.641003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483042 ] 00:13:55.303 [2024-12-09 23:55:33.787437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.562 [2024-12-09 23:55:33.892165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.562 [2024-12-09 23:55:33.896884] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:13:56.128 
23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:56.128 23:55:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:57.500 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:57.501 23:55:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:57.501 00:13:57.501 real 0m2.166s 00:13:57.501 user 0m0.012s 00:13:57.501 sys 0m0.001s 00:13:57.501 23:55:35 accel.accel_decomp_full -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.501 23:55:35 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:13:57.501 ************************************ 00:13:57.501 END TEST accel_decomp_full 00:13:57.501 ************************************ 00:13:57.501 23:55:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:57.501 23:55:35 accel -- common/autotest_common.sh@1105 -- 
# '[' 11 -le 1 ']' 00:13:57.501 23:55:35 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.501 23:55:35 accel -- common/autotest_common.sh@10 -- # set +x 00:13:57.501 ************************************ 00:13:57.501 START TEST accel_decomp_mcore 00:13:57.501 ************************************ 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:57.501 23:55:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:57.501 [2024-12-09 23:55:35.848478] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
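Unlike the single-core runs, the mcore test above passes -m 0xf, so the EAL line below shows -c 0xf and four reactor notices (cores 0-3) follow. A quick sanity check, assuming this console output were saved to a file (build.log is a hypothetical name):

    # Hypothetical check: coremask 0xf should start four distinct reactors.
    grep -o 'Reactor started on core [0-9]' build.log | sort -u   # expect cores 0,1,2,3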
00:13:57.501 [2024-12-09 23:55:35.848529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483332 ] 00:13:57.501 [2024-12-09 23:55:35.957130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.758 [2024-12-09 23:55:36.079641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.758 [2024-12-09 23:55:36.079695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.758 [2024-12-09 23:55:36.079745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.758 [2024-12-09 23:55:36.079748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.758 [2024-12-09 23:55:36.084329] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:58.016 
23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:58.016 23:55:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:59.388 00:13:59.388 real 0m1.902s 00:13:59.388 user 0m6.207s 00:13:59.388 sys 0m0.206s 
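The timings just above are the one place so far where CPU time exceeds wall time (user 0m6.207s against real 0m1.902s), which is exactly what a four-reactor run should produce. Dividing the two gives a rough average occupancy:

    # user / real from the result above: average cores kept busy during the run.
    echo 'scale=2; 6.207 / 1.902' | bc   # ~3.26 of the 4 cores in mask 0xf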
00:13:59.388 23:55:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.388 23:55:37 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:59.388 ************************************ 00:13:59.388 END TEST accel_decomp_mcore 00:13:59.388 ************************************ 00:13:59.388 23:55:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:59.388 23:55:37 accel -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:59.388 23:55:37 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.388 23:55:37 accel -- common/autotest_common.sh@10 -- # set +x 00:13:59.388 ************************************ 00:13:59.388 START TEST accel_decomp_full_mcore 00:13:59.388 ************************************ 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:59.388 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:59.389 23:55:37 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:59.389 [2024-12-09 23:55:37.796612] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:13:59.389 [2024-12-09 23:55:37.796663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483623 ] 00:13:59.389 [2024-12-09 23:55:37.901400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.646 [2024-12-09 23:55:38.018102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.646 [2024-12-09 23:55:38.018186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.646 [2024-12-09 23:55:38.018252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.646 [2024-12-09 23:55:38.018255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.646 [2024-12-09 23:55:38.022715] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 
23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:00.211 23:55:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:14:01.585 23:55:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:01.585 00:14:01.585 real 0m1.935s 00:14:01.585 user 0m6.336s 00:14:01.585 sys 0m0.222s 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.585 23:55:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:14:01.585 ************************************ 00:14:01.585 END TEST accel_decomp_full_mcore 00:14:01.585 ************************************ 00:14:01.585 23:55:39 accel -- accel/accel.sh@123 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:14:01.585 23:55:39 accel -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:14:01.585 23:55:39 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.585 23:55:39 accel -- common/autotest_common.sh@10 -- # set +x 00:14:01.585 ************************************ 00:14:01.585 START TEST accel_decomp_mthread 00:14:01.585 ************************************ 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:14:01.585 23:55:39 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:14:01.585 [2024-12-09 23:55:39.802111] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:14:01.585 [2024-12-09 23:55:39.802264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483795 ] 00:14:01.585 [2024-12-09 23:55:39.883289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.585 [2024-12-09 23:55:39.942195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.585 [2024-12-09 23:55:39.946611] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- 
# val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:02.151 23:55:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:03.524 00:14:03.524 real 0m1.902s 00:14:03.524 user 0m1.486s 00:14:03.524 sys 0m0.226s 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.524 23:55:41 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:14:03.524 ************************************ 00:14:03.524 END TEST accel_decomp_mthread 00:14:03.524 ************************************ 00:14:03.524 23:55:41 accel -- accel/accel.sh@124 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:14:03.524 23:55:41 accel -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:03.524 23:55:41 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.524 23:55:41 accel -- common/autotest_common.sh@10 -- # set +x 00:14:03.524 
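Every variant in this stretch launches accel_perf the same way: build_accel_config collects JSON fragments (here just ioat_scan_accel_module), joins them with a comma IFS, passes the result through jq -r ., and hands it to the binary as -c /dev/fd/62. A hedged sketch of that plumbing — the array element, the comma join, and the fd-62 redirection come from the trace, while the outer JSON shape is an assumption:

  SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
  accel_json_cfg=('{"method": "ioat_scan_accel_module"}')   # element seen above

  join_cfg() {
    local IFS=,                     # matches the trace's `local IFS=,` + `jq -r .`
    printf '[%s]' "${accel_json_cfg[*]}" | jq -r .
  }

  # Feed the config on fd 62 so no temporary file is needed:
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 \
      -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2 \
      62< <(join_cfg)

(-o 0 -T 2 is the flag mix of the accel_decomp_full_mthread run that starts below.)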
************************************ 00:14:03.524 START TEST accel_decomp_full_mthread 00:14:03.524 ************************************ 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1129 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:14:03.524 23:55:41 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:14:03.524 [2024-12-09 23:55:41.752624] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:14:03.524 [2024-12-09 23:55:41.752777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484072 ] 00:14:03.524 [2024-12-09 23:55:41.888530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.524 [2024-12-09 23:55:41.995717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.524 [2024-12-09 23:55:42.000679] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.090 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:14:04.347 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.347 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.347 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:04.348 23:55:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:05.720 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:05.721 00:14:05.721 real 0m2.157s 00:14:05.721 user 0m1.657s 00:14:05.721 sys 0m0.309s 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.721 23:55:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:14:05.721 
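The real/user/sys lines above and the START/END banners around every test come from one wrapper, run_test, invoked throughout this log. A hedged reimplementation (the real wrapper lives in autotest_common.sh; its xtrace bookkeeping is omitted):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                        # produces the real/user/sys triplet
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }

  # Invocation matching the run traced above:
  # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress \
  #     -l "$SPDK/test/accel/bib" -y -o 0 -T 2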
************************************ 00:14:05.721 END TEST accel_decomp_full_mthread 00:14:05.721 ************************************ 00:14:05.721 23:55:43 accel -- accel/accel.sh@126 -- # [[ n == y ]] 00:14:05.721 23:55:43 accel -- accel/accel.sh@139 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:05.721 23:55:43 accel -- accel/accel.sh@139 -- # build_accel_config 00:14:05.721 23:55:43 accel -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:05.721 23:55:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:05.721 23:55:43 accel -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.721 23:55:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:05.721 23:55:43 accel -- common/autotest_common.sh@10 -- # set +x 00:14:05.721 23:55:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:05.721 23:55:43 accel -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:14:05.721 23:55:43 accel -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "ioat_scan_accel_module"}') 00:14:05.721 23:55:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:05.721 23:55:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:14:05.721 23:55:43 accel -- accel/accel.sh@41 -- # jq -r . 00:14:05.721 ************************************ 00:14:05.721 START TEST accel_dif_functional_tests 00:14:05.721 ************************************ 00:14:05.721 23:55:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:05.721 [2024-12-09 23:55:44.019275] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:14:05.721 [2024-12-09 23:55:44.019422] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484353 ] 00:14:05.721 [2024-12-09 23:55:44.164492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.978 [2024-12-09 23:55:44.268035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.978 [2024-12-09 23:55:44.268091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.979 [2024-12-09 23:55:44.268095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.979 [2024-12-09 23:55:44.272632] accel_ioat_rpc.c: 22:rpc_ioat_scan_accel_module: *NOTICE*: Enabling IOAT 00:14:07.352 [2024-12-09 23:55:45.584397] 'OCF_Core' volume operations registered 00:14:07.352 [2024-12-09 23:55:45.584453] 'OCF_Cache' volume operations registered 00:14:07.352 [2024-12-09 23:55:45.588756] 'OCF Composite' volume operations registered 00:14:07.352 [2024-12-09 23:55:45.593060] 'SPDK_block_device' volume operations registered 00:14:07.352 00:14:07.352 00:14:07.352 CUnit - A unit testing framework for C - Version 2.1-3 00:14:07.352 http://cunit.sourceforge.net/ 00:14:07.352 00:14:07.352 00:14:07.352 Suite: accel_dif 00:14:07.352 Test: verify: DIF generated, GUARD check ...passed 00:14:07.352 Test: verify: DIX generated, GUARD check ...passed 00:14:07.352 Test: verify: DIF generated, APPTAG check ...passed 00:14:07.352 Test: verify: DIX generated, APPTAG check ...passed 00:14:07.352 Test: verify: DIF generated, REFTAG check ...passed 00:14:07.352 Test: verify: DIX generated, REFTAG check ...passed 00:14:07.352 Test: verify: DIX generated, all flags check ...passed 00:14:07.352 Test: 
verify: DIF not generated, GUARD check ...[2024-12-09 23:55:45.597215] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:14:07.352 passed
00:14:07.352 Test: verify: DIX not generated, GUARD check ...[2024-12-09 23:55:45.597281] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=0, Actual=7867
00:14:07.352 passed
00:14:07.352 Test: verify: DIF not generated, APPTAG check ...[2024-12-09 23:55:45.597315] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:14:07.352 passed
00:14:07.352 Test: verify: DIX not generated, APPTAG check ...[2024-12-09 23:55:45.597354] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=0
00:14:07.352 passed
00:14:07.352 Test: verify: DIF not generated, REFTAG check ...[2024-12-09 23:55:45.597385] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:14:07.352 passed
00:14:07.352 Test: verify: DIX not generated, REFTAG check ...[2024-12-09 23:55:45.597426] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=0
00:14:07.352 passed
00:14:07.352 Test: verify: DIX not generated, all flags check ...[2024-12-09 23:55:45.597474] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=0, Actual=7867
00:14:07.352 passed
00:14:07.352 Test: verify: DIX guard not generated, all flags check ...[2024-12-09 23:55:45.597516] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=0, Actual=7867
00:14:07.352 passed
00:14:07.352 Test: verify: DIX apptag not generated, all flags check ...[2024-12-09 23:55:45.597559] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=0
00:14:07.352 passed
00:14:07.352 Test: verify: DIX reftag not generated, all flags check ...[2024-12-09 23:55:45.597604] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=0
00:14:07.352 passed
00:14:07.352 Test: verify: DIF APPTAG correct, APPTAG check ...passed
00:14:07.352 Test: verify: DIX APPTAG correct, APPTAG check ...passed
00:14:07.352 Test: verify: DIF APPTAG incorrect, APPTAG check ...[2024-12-09 23:55:45.597702] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:14:07.352 passed
00:14:07.352 Test: verify: DIX APPTAG incorrect, APPTAG check ...[2024-12-09 23:55:45.597758] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:14:07.352 passed
00:14:07.352 Test: verify: DIF APPTAG incorrect, no APPTAG check ...passed
00:14:07.352 Test: verify: DIX APPTAG incorrect, no APPTAG check ...passed
00:14:07.352 Test: verify: DIF REFTAG incorrect, REFTAG ignore ...passed
00:14:07.352 Test: verify: DIX REFTAG incorrect, REFTAG ignore ...passed
00:14:07.352 Test: verify: DIF REFTAG_INIT correct, REFTAG check ...passed
00:14:07.352 Test: verify: DIX REFTAG_INIT correct, REFTAG check ...passed
00:14:07.352 Test: verify: DIF REFTAG_INIT incorrect, REFTAG check ...[2024-12-09 23:55:45.598026] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:14:07.352 passed
00:14:07.352 Test: verify: DIX REFTAG_INIT incorrect, REFTAG check ...[2024-12-09 23:55:45.598077] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:14:07.352 passed
00:14:07.352 Test: verify copy: DIF generated, GUARD check ...passed
00:14:07.352 Test: verify copy: DIF generated, APPTAG check ...passed 00:14:07.352 Test: verify copy: DIF generated, REFTAG check ...passed 00:14:07.352 Test: verify copy: DIF not generated, GUARD check ...[2024-12-09 23:55:45.598240] dif.c: 922:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:07.352 passed 00:14:07.352 Test: verify copy: DIF not generated, APPTAG check ...[2024-12-09 23:55:45.598275] dif.c: 937:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:07.352 passed 00:14:07.352 Test: verify copy: DIF not generated, REFTAG check ...[2024-12-09 23:55:45.598306] dif.c: 872:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:07.352 passed 00:14:07.352 Test: generate copy: DIF generated, GUARD check ...passed 00:14:07.352 Test: generate copy: DIF generated, APTTAG check ...passed 00:14:07.352 Test: generate copy: DIF generated, REFTAG check ...passed 00:14:07.352 Test: generate: DIX generated, GUARD check ...passed 00:14:07.352 Test: generate: DIX generated, APTTAG check ...passed 00:14:07.352 Test: generate: DIX generated, REFTAG check ...passed 00:14:07.352 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:14:07.352 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:14:07.352 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:14:07.352 Test: generate copy: DIF iovecs-len validate ...[2024-12-09 23:55:45.598737] dif.c:1291:_spdk_dif_insert_copy: *ERROR*: Size of iovec arrays are not valid. 00:14:07.352 passed 00:14:07.352 Test: generate copy: DIF buffer alignment validate ...passed 00:14:07.352 Test: generate copy sequence: DIF generated, GUARD check ...passed 00:14:07.352 Test: generate copy sequence: DIF generated, APTTAG check ...passed 00:14:07.352 Test: generate copy sequence: DIF generated, REFTAG check ...passed 00:14:07.352 Test: verify copy sequence: DIF generated, GUARD check ...passed 00:14:07.352 Test: verify copy sequence: DIF generated, APPTAG check ...passed 00:14:07.352 Test: verify copy sequence: DIF generated, REFTAG check ...passed 00:14:07.352 00:14:07.352 Run Summary: Type Total Ran Passed Failed Inactive 00:14:07.352 suites 1 1 n/a 0 0 00:14:07.352 tests 52 52 52 0 0 00:14:07.352 asserts 259 259 259 0 n/a 00:14:07.352 00:14:07.352 Elapsed time = 0.005 seconds 00:14:07.917 00:14:07.917 real 0m2.486s 00:14:07.917 user 0m4.177s 00:14:07.917 sys 0m0.409s 00:14:07.917 23:55:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.917 23:55:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:14:07.917 ************************************ 00:14:07.917 END TEST accel_dif_functional_tests 00:14:07.917 ************************************ 00:14:08.175 00:14:08.175 real 0m55.093s 00:14:08.175 user 0m53.084s 00:14:08.175 sys 0m8.945s 00:14:08.175 23:55:46 accel -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.175 23:55:46 accel -- common/autotest_common.sh@10 -- # set +x 00:14:08.175 ************************************ 00:14:08.175 END TEST accel 00:14:08.175 ************************************ 00:14:08.175 23:55:46 -- spdk/autotest.sh@173 -- # run_test accel_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh 00:14:08.175 23:55:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.175 23:55:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.175 
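A note for reading the accel_dif suite above: every *ERROR* line from dif.c is intentional. The "not generated" and "incorrect" cases inject a mismatch and pass precisely because _dif_verify or _dif_reftag_check reports it; silence would be the failure. To triage a saved copy of this output (dif.log is a hypothetical capture with the Jenkins timestamps stripped):

  # List any CUnit case that did not end in "passed", then show the summary.
  grep -E '^Test: ' dif.log | grep -v 'passed$' || echo 'all cases passed'
  grep -A3 '^Run Summary:' dif.log   # the suites/tests/asserts table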
23:55:46 -- common/autotest_common.sh@10 -- # set +x 00:14:08.175 ************************************ 00:14:08.175 START TEST accel_rpc 00:14:08.175 ************************************ 00:14:08.175 23:55:46 accel_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh 00:14:08.175 * Looking for test storage... 00:14:08.175 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel 00:14:08.175 23:55:46 accel_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:08.175 23:55:46 accel_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:14:08.175 23:55:46 accel_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.433 23:55:46 accel_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@345 -- # : 1 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@353 -- # local d=1 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@355 -- # echo 1 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@353 -- # local d=2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@355 -- # echo 2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.433 23:55:46 accel_rpc -- scripts/common.sh@368 -- # return 0 00:14:08.433 23:55:46 accel_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.433 23:55:46 accel_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.434 --rc genhtml_branch_coverage=1 00:14:08.434 --rc genhtml_function_coverage=1 00:14:08.434 --rc genhtml_legend=1 00:14:08.434 --rc geninfo_all_blocks=1 00:14:08.434 --rc geninfo_unexecuted_blocks=1 00:14:08.434 00:14:08.434 ' 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.434 --rc genhtml_branch_coverage=1 00:14:08.434 --rc genhtml_function_coverage=1 00:14:08.434 --rc genhtml_legend=1 00:14:08.434 --rc geninfo_all_blocks=1 00:14:08.434 --rc geninfo_unexecuted_blocks=1 00:14:08.434 00:14:08.434 ' 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.434 --rc genhtml_branch_coverage=1 00:14:08.434 --rc genhtml_function_coverage=1 00:14:08.434 --rc genhtml_legend=1 00:14:08.434 --rc geninfo_all_blocks=1 00:14:08.434 --rc geninfo_unexecuted_blocks=1 00:14:08.434 00:14:08.434 ' 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.434 --rc genhtml_branch_coverage=1 00:14:08.434 --rc genhtml_function_coverage=1 00:14:08.434 --rc genhtml_legend=1 00:14:08.434 --rc geninfo_all_blocks=1 00:14:08.434 --rc geninfo_unexecuted_blocks=1 00:14:08.434 00:14:08.434 ' 00:14:08.434 23:55:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:08.434 23:55:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=484702 00:14:08.434 23:55:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:08.434 23:55:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 484702 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@835 -- # '[' -z 484702 ']' 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:14:08.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.434 23:55:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.434 [2024-12-09 23:55:46.812728] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:14:08.434 [2024-12-09 23:55:46.812857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484702 ] 00:14:08.434 [2024-12-09 23:55:46.911227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.691 [2024-12-09 23:55:46.996714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.691 23:55:47 accel_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.691 23:55:47 accel_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:08.691 23:55:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:14:08.691 23:55:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:14:08.691 23:55:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:14:08.691 23:55:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:14:08.691 23:55:47 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:14:08.691 23:55:47 accel_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.691 23:55:47 accel_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.691 23:55:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.691 ************************************ 00:14:08.691 START TEST accel_assign_opcode 00:14:08.691 ************************************ 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1129 -- # accel_assign_opcode_test_suite 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:08.691 [2024-12-09 23:55:47.085520] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.691 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:08.692 [2024-12-09 23:55:47.093520] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:14:08.692 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.692 23:55:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:14:08.692 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.692 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:09.257 [2024-12-09 23:55:47.469937] 'OCF_Core' volume operations 
registered 00:14:09.257 [2024-12-09 23:55:47.469987] 'OCF_Cache' volume operations registered 00:14:09.257 [2024-12-09 23:55:47.477351] 'OCF Composite' volume operations registered 00:14:09.257 [2024-12-09 23:55:47.485913] 'SPDK_block_device' volume operations registered 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.257 software 00:14:09.257 00:14:09.257 real 0m0.687s 00:14:09.257 user 0m0.056s 00:14:09.257 sys 0m0.011s 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.257 23:55:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:14:09.257 ************************************ 00:14:09.257 END TEST accel_assign_opcode 00:14:09.257 ************************************ 00:14:09.515 23:55:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 484702 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@954 -- # '[' -z 484702 ']' 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@958 -- # kill -0 484702 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@959 -- # uname 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484702 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484702' 00:14:09.515 killing process with pid 484702 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@973 -- # kill 484702 00:14:09.515 23:55:47 accel_rpc -- common/autotest_common.sh@978 -- # wait 484702 00:14:10.080 00:14:10.080 real 0m2.024s 00:14:10.080 user 0m1.692s 00:14:10.080 sys 0m0.783s 00:14:10.080 23:55:48 accel_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.080 23:55:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.080 ************************************ 00:14:10.080 END TEST accel_rpc 00:14:10.081 ************************************ 00:14:10.081 23:55:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh 00:14:10.081 23:55:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.081 23:55:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.081 23:55:48 -- common/autotest_common.sh@10 -- # set +x 00:14:10.339 ************************************ 00:14:10.339 START TEST app_cmdline 00:14:10.339 ************************************ 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh 00:14:10.339 * Looking for test storage... 
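The accel_rpc suite that just finished drives opcode assignment purely over JSON-RPC: before framework_start_init it assigns the copy opcode first to a non-existent module, then to the software module, and afterwards confirms the assignment. A minimal sketch of that sequence, assuming $SPDK_DIR points at the SPDK checkout used in this run and spdk_tgt was started with --wait-for-rpc:

# Sketch only: the accel_assign_opcode flow traced above.
rpc=$SPDK_DIR/scripts/rpc.py
$rpc accel_assign_opc -o copy -m incorrect    # accepted pre-init; logged as a NOTICE
$rpc accel_assign_opc -o copy -m software     # the later assignment wins
$rpc framework_start_init
$rpc accel_get_opc_assignments | jq -r .copy  # expected output: software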
00:14:10.339 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.339 23:55:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:10.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.339 --rc genhtml_branch_coverage=1 00:14:10.339 --rc genhtml_function_coverage=1 00:14:10.339 --rc genhtml_legend=1 00:14:10.339 --rc geninfo_all_blocks=1 00:14:10.339 --rc geninfo_unexecuted_blocks=1 00:14:10.339 00:14:10.339 ' 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:10.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.339 --rc genhtml_branch_coverage=1 00:14:10.339 --rc genhtml_function_coverage=1 00:14:10.339 --rc genhtml_legend=1 00:14:10.339 --rc geninfo_all_blocks=1 00:14:10.339 --rc geninfo_unexecuted_blocks=1 
00:14:10.339 00:14:10.339 ' 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:10.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.339 --rc genhtml_branch_coverage=1 00:14:10.339 --rc genhtml_function_coverage=1 00:14:10.339 --rc genhtml_legend=1 00:14:10.339 --rc geninfo_all_blocks=1 00:14:10.339 --rc geninfo_unexecuted_blocks=1 00:14:10.339 00:14:10.339 ' 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:10.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.339 --rc genhtml_branch_coverage=1 00:14:10.339 --rc genhtml_function_coverage=1 00:14:10.339 --rc genhtml_legend=1 00:14:10.339 --rc geninfo_all_blocks=1 00:14:10.339 --rc geninfo_unexecuted_blocks=1 00:14:10.339 00:14:10.339 ' 00:14:10.339 23:55:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:14:10.339 23:55:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=485036 00:14:10.339 23:55:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:14:10.339 23:55:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 485036 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 485036 ']' 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.339 23:55:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:10.597 [2024-12-09 23:55:48.879489] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
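app_cmdline starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target should expose exactly those two methods. A hedged sketch of the positive half of that check ($rpc and $SPDK_DIR as in the sketch above):

# Sketch: enumerate the allow-listed methods and read the version.
$rpc rpc_get_methods | jq -r '.[]' | sort  # expect: rpc_get_methods, spdk_get_version
$rpc spdk_get_version | jq -r .version     # here: "SPDK v25.01-pre git sha1 1ae735a5d"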
00:14:10.597 [2024-12-09 23:55:48.879565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485036 ] 00:14:10.597 [2024-12-09 23:55:48.944120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.597 [2024-12-09 23:55:49.003492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.855 [2024-12-09 23:55:49.297653] 'OCF_Core' volume operations registered 00:14:10.855 [2024-12-09 23:55:49.297757] 'OCF_Cache' volume operations registered 00:14:10.855 [2024-12-09 23:55:49.304745] 'OCF Composite' volume operations registered 00:14:10.855 [2024-12-09 23:55:49.311891] 'SPDK_block_device' volume operations registered 00:14:11.113 23:55:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.113 23:55:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:14:11.113 23:55:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:14:11.371 { 00:14:11.371 "version": "SPDK v25.01-pre git sha1 1ae735a5d", 00:14:11.371 "fields": { 00:14:11.371 "major": 25, 00:14:11.371 "minor": 1, 00:14:11.371 "patch": 0, 00:14:11.371 "suffix": "-pre", 00:14:11.371 "commit": "1ae735a5d" 00:14:11.371 } 00:14:11.371 } 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:14:11.371 23:55:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py ]] 00:14:11.371 23:55:49 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:11.936 request: 00:14:11.936 { 00:14:11.936 "method": "env_dpdk_get_mem_stats", 00:14:11.936 "req_id": 1 00:14:11.936 } 00:14:11.936 Got JSON-RPC error response 00:14:11.936 response: 00:14:11.936 { 00:14:11.936 "code": -32601, 00:14:11.936 "message": "Method not found" 00:14:11.936 } 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.193 23:55:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 485036 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 485036 ']' 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 485036 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485036 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485036' 00:14:12.193 killing process with pid 485036 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 485036 00:14:12.193 23:55:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 485036 00:14:12.758 00:14:12.758 real 0m2.653s 00:14:12.758 user 0m3.083s 00:14:12.758 sys 0m0.756s 00:14:12.758 23:55:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.758 23:55:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:12.758 ************************************ 00:14:12.758 END TEST app_cmdline 00:14:12.758 ************************************ 00:14:13.016 23:55:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh 00:14:13.016 23:55:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:13.016 23:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.016 23:55:51 -- common/autotest_common.sh@10 -- # set +x 00:14:13.016 ************************************ 00:14:13.016 START TEST version 00:14:13.016 ************************************ 00:14:13.016 23:55:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh 00:14:13.016 * Looking for test storage... 
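The NOT wrapper traced above completes the allowlist check from the other side: env_dpdk_get_mem_stats is outside the allowlist, so the call must fail with JSON-RPC error -32601. A sketch of that negative test under the same assumptions:

# Sketch: a method outside --rpcs-allowed must come back "Method not found".
if $rpc env_dpdk_get_mem_stats 2>&1 | grep -q 'Method not found'; then
    echo 'allowlist enforced (-32601) as expected'
else
    echo 'unexpected success: allowlist not enforced' >&2
    exit 1
fi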
00:14:13.016 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app 00:14:13.016 23:55:51 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:13.016 23:55:51 version -- common/autotest_common.sh@1711 -- # lcov --version 00:14:13.016 23:55:51 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:13.016 23:55:51 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:13.016 23:55:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.016 23:55:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.016 23:55:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.016 23:55:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.016 23:55:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.016 23:55:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.016 23:55:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.016 23:55:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.016 23:55:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.016 23:55:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.016 23:55:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.016 23:55:51 version -- scripts/common.sh@344 -- # case "$op" in 00:14:13.016 23:55:51 version -- scripts/common.sh@345 -- # : 1 00:14:13.016 23:55:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.017 23:55:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:13.017 23:55:51 version -- scripts/common.sh@365 -- # decimal 1 00:14:13.017 23:55:51 version -- scripts/common.sh@353 -- # local d=1 00:14:13.017 23:55:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.017 23:55:51 version -- scripts/common.sh@355 -- # echo 1 00:14:13.017 23:55:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.017 23:55:51 version -- scripts/common.sh@366 -- # decimal 2 00:14:13.017 23:55:51 version -- scripts/common.sh@353 -- # local d=2 00:14:13.017 23:55:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.017 23:55:51 version -- scripts/common.sh@355 -- # echo 2 00:14:13.017 23:55:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.017 23:55:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.017 23:55:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.017 23:55:51 version -- scripts/common.sh@368 -- # return 0 00:14:13.017 23:55:51 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.017 23:55:51 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:13.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.017 --rc genhtml_branch_coverage=1 00:14:13.017 --rc genhtml_function_coverage=1 00:14:13.017 --rc genhtml_legend=1 00:14:13.017 --rc geninfo_all_blocks=1 00:14:13.017 --rc geninfo_unexecuted_blocks=1 00:14:13.017 00:14:13.017 ' 00:14:13.017 23:55:51 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:13.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.017 --rc genhtml_branch_coverage=1 00:14:13.017 --rc genhtml_function_coverage=1 00:14:13.017 --rc genhtml_legend=1 00:14:13.017 --rc geninfo_all_blocks=1 00:14:13.017 --rc geninfo_unexecuted_blocks=1 00:14:13.017 00:14:13.017 ' 00:14:13.017 23:55:51 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:13.017 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.017 --rc genhtml_branch_coverage=1 00:14:13.017 --rc genhtml_function_coverage=1 00:14:13.017 --rc genhtml_legend=1 00:14:13.017 --rc geninfo_all_blocks=1 00:14:13.017 --rc geninfo_unexecuted_blocks=1 00:14:13.017 00:14:13.017 ' 00:14:13.017 23:55:51 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:13.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.017 --rc genhtml_branch_coverage=1 00:14:13.017 --rc genhtml_function_coverage=1 00:14:13.017 --rc genhtml_legend=1 00:14:13.017 --rc geninfo_all_blocks=1 00:14:13.017 --rc geninfo_unexecuted_blocks=1 00:14:13.017 00:14:13.017 ' 00:14:13.017 23:55:51 version -- app/version.sh@17 -- # get_header_version major 00:14:13.017 23:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # cut -f2 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:14:13.017 23:55:51 version -- app/version.sh@17 -- # major=25 00:14:13.017 23:55:51 version -- app/version.sh@18 -- # get_header_version minor 00:14:13.017 23:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # cut -f2 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:14:13.017 23:55:51 version -- app/version.sh@18 -- # minor=1 00:14:13.017 23:55:51 version -- app/version.sh@19 -- # get_header_version patch 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # cut -f2 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:14:13.017 23:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:14:13.017 23:55:51 version -- app/version.sh@19 -- # patch=0 00:14:13.017 23:55:51 version -- app/version.sh@20 -- # get_header_version suffix 00:14:13.017 23:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # cut -f2 00:14:13.017 23:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:14:13.017 23:55:51 version -- app/version.sh@20 -- # suffix=-pre 00:14:13.017 23:55:51 version -- app/version.sh@22 -- # version=25.1 00:14:13.017 23:55:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:14:13.017 23:55:51 version -- app/version.sh@28 -- # version=25.1rc0 00:14:13.017 23:55:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python 00:14:13.017 23:55:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:14:13.275 23:55:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:14:13.275 23:55:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:14:13.275 00:14:13.275 real 0m0.244s 00:14:13.275 user 0m0.173s 00:14:13.275 sys 0m0.097s 00:14:13.275 23:55:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.275 23:55:51 version -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.275 ************************************ 00:14:13.275 END TEST version 00:14:13.275 ************************************ 00:14:13.275 23:55:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:14:13.275 23:55:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:14:13.275 23:55:51 -- spdk/autotest.sh@194 -- # uname -s 00:14:13.275 23:55:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:14:13.275 23:55:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:13.275 23:55:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:13.275 23:55:51 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:14:13.276 23:55:51 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme 00:14:13.276 23:55:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:13.276 23:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.276 23:55:51 -- common/autotest_common.sh@10 -- # set +x 00:14:13.276 ************************************ 00:14:13.276 START TEST blockdev_nvme 00:14:13.276 ************************************ 00:14:13.276 23:55:51 blockdev_nvme -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme 00:14:13.276 * Looking for test storage... 00:14:13.276 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev 00:14:13.276 23:55:51 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:13.276 23:55:51 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:13.276 23:55:51 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:13.534 23:55:51 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.534 23:55:51 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:14:13.534 23:55:51 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.534 23:55:51 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:13.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.534 --rc genhtml_branch_coverage=1 00:14:13.534 --rc genhtml_function_coverage=1 00:14:13.534 --rc genhtml_legend=1 00:14:13.534 --rc geninfo_all_blocks=1 00:14:13.534 --rc geninfo_unexecuted_blocks=1 00:14:13.534 00:14:13.534 ' 00:14:13.534 23:55:51 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:13.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.534 --rc genhtml_branch_coverage=1 00:14:13.534 --rc genhtml_function_coverage=1 00:14:13.534 --rc genhtml_legend=1 00:14:13.534 --rc geninfo_all_blocks=1 00:14:13.534 --rc geninfo_unexecuted_blocks=1 00:14:13.534 00:14:13.534 ' 00:14:13.534 23:55:51 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:13.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.534 --rc genhtml_branch_coverage=1 00:14:13.534 --rc genhtml_function_coverage=1 00:14:13.534 --rc genhtml_legend=1 00:14:13.534 --rc geninfo_all_blocks=1 00:14:13.534 --rc geninfo_unexecuted_blocks=1 00:14:13.534 00:14:13.534 ' 00:14:13.534 23:55:51 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:13.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.534 --rc genhtml_branch_coverage=1 00:14:13.534 --rc genhtml_function_coverage=1 00:14:13.534 --rc genhtml_legend=1 00:14:13.534 --rc geninfo_all_blocks=1 00:14:13.535 --rc geninfo_unexecuted_blocks=1 00:14:13.535 00:14:13.535 ' 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:14:13.535 23:55:51 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json 00:14:13.535 
23:55:51 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=485563 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:13.535 23:55:51 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 485563 00:14:13.535 23:55:51 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 485563 ']' 00:14:13.535 23:55:51 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.535 23:55:51 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.535 23:55:51 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.535 23:55:51 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.535 23:55:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.535 [2024-12-09 23:55:51.903165] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
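The lt 1.15 2 probe that repeats before each suite is scripts/common.sh checking whether the installed lcov predates 2.x, in which case the legacy --rc lcov_* coverage flags are kept. As the trace shows, both version strings are split on ., - and : and compared element by element. A condensed sketch of that idiom:

# Sketch of the element-wise compare traced above (after scripts/common.sh).
lt() {  # returns 0 when version $1 sorts strictly before $2
    local -a ver1 ver2; local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov 1.15 < 2: keep the --rc options'  # matches the trace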
00:14:13.535 [2024-12-09 23:55:51.903288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485563 ] 00:14:13.535 [2024-12-09 23:55:52.013712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.793 [2024-12-09 23:55:52.105318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.051 [2024-12-09 23:55:52.494214] 'OCF_Core' volume operations registered 00:14:14.051 [2024-12-09 23:55:52.494314] 'OCF_Cache' volume operations registered 00:14:14.051 [2024-12-09 23:55:52.502980] 'OCF Composite' volume operations registered 00:14:14.051 [2024-12-09 23:55:52.510674] 'SPDK_block_device' volume operations registered 00:14:14.310 23:55:52 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.310 23:55:52 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:14:14.310 23:55:52 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:14.310 23:55:52 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:14:14.310 23:55:52 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:14:14.310 23:55:52 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:14:14.310 23:55:52 blockdev_nvme -- bdev/blockdev.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:14:14.310 23:55:52 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:84:00.0" } } ] }'\''' 00:14:14.310 23:55:52 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.310 23:55:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@785 -- 
# mapfile -t bdevs 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dda6641b-2b58-4ffe-9396-42c26f1f5b94"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 512,' ' "num_blocks": 1953525168,' ' "uuid": "dda6641b-2b58-4ffe-9396-42c26f1f5b94",' ' "numa_id": 1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:84:00.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:84:00.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x8086",' ' "model_number": "INTEL SSDPE2KX010T8",' ' "serial_number": "BTLJ724400Z71P0FGN",' ' "firmware_revision": "VDV10184",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 1,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.2"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:17.630 23:55:55 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 485563 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 485563 ']' 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 485563 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 485563 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 485563' 00:14:17.630 killing process with pid 485563 00:14:17.630 23:55:55 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 485563 00:14:17.630 23:55:55 blockdev_nvme -- 
common/autotest_common.sh@978 -- # wait 485563 00:14:19.530 23:55:58 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:19.530 23:55:58 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:19.530 23:55:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:19.530 23:55:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.530 23:55:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.530 ************************************ 00:14:19.530 START TEST bdev_hello_world 00:14:19.530 ************************************ 00:14:19.530 23:55:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:19.789 [2024-12-09 23:55:58.104530] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:14:19.789 [2024-12-09 23:55:58.104679] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486258 ] 00:14:19.789 [2024-12-09 23:55:58.240914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.047 [2024-12-09 23:55:58.333823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.306 [2024-12-09 23:55:58.713954] 'OCF_Core' volume operations registered 00:14:20.306 [2024-12-09 23:55:58.714054] 'OCF_Cache' volume operations registered 00:14:20.306 [2024-12-09 23:55:58.722652] 'OCF Composite' volume operations registered 00:14:20.306 [2024-12-09 23:55:58.731858] 'SPDK_block_device' volume operations registered 00:14:23.583 [2024-12-09 23:56:01.619555] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:23.583 [2024-12-09 23:56:01.619642] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:14:23.583 [2024-12-09 23:56:01.619697] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:23.583 [2024-12-09 23:56:01.624330] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:23.583 [2024-12-09 23:56:01.624700] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:23.583 [2024-12-09 23:56:01.624760] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:23.583 [2024-12-09 23:56:01.625480] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
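bdev_hello_world runs build/examples/hello_bdev against the generated bdev.json: it opens Nvme0n1, grabs an I/O channel, writes a buffer, reads it back, and stops once the readback matches, which is the 'Hello World!' line above. The invocation, reconstructed from this log ($SPDK_DIR as before; -b names the target bdev):

# Sketch: the hello_bdev run exercised above.
$SPDK_DIR/build/examples/hello_bdev \
    --json $SPDK_DIR/test/bdev/bdev.json \
    -b Nvme0n1
# Expected NOTICE sequence: open bdev -> open io channel -> write ->
# write completed -> read -> 'Read string from bdev : Hello World!'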
00:14:23.583 00:14:23.583 [2024-12-09 23:56:01.625545] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:24.954 00:14:24.954 real 0m5.395s 00:14:24.954 user 0m3.831s 00:14:24.954 sys 0m0.801s 00:14:24.954 23:56:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.954 23:56:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:24.954 ************************************ 00:14:24.954 END TEST bdev_hello_world 00:14:24.954 ************************************ 00:14:24.954 23:56:03 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:14:24.954 23:56:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.954 23:56:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.954 23:56:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.212 ************************************ 00:14:25.212 START TEST bdev_bounds 00:14:25.212 ************************************ 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=486926 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 486926' 00:14:25.212 Process bdevio pid: 486926 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 486926 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 486926 ']' 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.212 23:56:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:25.212 [2024-12-09 23:56:03.575937] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:14:25.212 [2024-12-09 23:56:03.576107] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486926 ] 00:14:25.212 [2024-12-09 23:56:03.715303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.470 [2024-12-09 23:56:03.824318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.470 [2024-12-09 23:56:03.824368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.470 [2024-12-09 23:56:03.824372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.728 [2024-12-09 23:56:04.098875] 'OCF_Core' volume operations registered 00:14:25.728 [2024-12-09 23:56:04.098935] 'OCF_Cache' volume operations registered 00:14:25.728 [2024-12-09 23:56:04.103552] 'OCF Composite' volume operations registered 00:14:25.728 [2024-12-09 23:56:04.108289] 'SPDK_block_device' volume operations registered 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:29.944 I/O targets: 00:14:29.944 Nvme0n1: 1953525168 blocks of 512 bytes (953870 MiB) 00:14:29.944 00:14:29.944 00:14:29.944 CUnit - A unit testing framework for C - Version 2.1-3 00:14:29.944 http://cunit.sourceforge.net/ 00:14:29.944 00:14:29.944 00:14:29.944 Suite: bdevio tests on: Nvme0n1 00:14:29.944 Test: blockdev write read block ...passed 00:14:29.944 Test: blockdev write zeroes read block ...passed 00:14:29.944 Test: blockdev write zeroes read no split ...passed 00:14:29.944 Test: blockdev write zeroes read split ...passed 00:14:29.944 Test: blockdev write zeroes read split partial ...passed 00:14:29.944 Test: blockdev reset ...[2024-12-09 23:56:08.256981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:84:00.0, 0] resetting controller 00:14:29.944 [2024-12-09 23:56:08.259328] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:84:00.0, 0] Resetting controller successful. 
00:14:29.944 passed 00:14:29.944 Test: blockdev write read 8 blocks ...passed 00:14:29.944 Test: blockdev write read size > 128k ...passed 00:14:29.944 Test: blockdev write read invalid size ...passed 00:14:29.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:29.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:29.944 Test: blockdev write read max offset ...passed 00:14:29.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:29.944 Test: blockdev writev readv 8 blocks ...passed 00:14:29.944 Test: blockdev writev readv 30 x 1block ...passed 00:14:29.944 Test: blockdev writev readv block ...passed 00:14:29.944 Test: blockdev writev readv size > 128k ...passed 00:14:29.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:29.944 Test: blockdev comparev and writev ...passed 00:14:29.944 Test: blockdev nvme passthru rw ...passed 00:14:29.944 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:56:08.285746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:889 PRP1 0x0 PRP2 0x0 00:14:29.944 [2024-12-09 23:56:08.285786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:889 cdw0:0 sqhd:0056 p:1 m:0 dnr:1 00:14:29.944 passed 00:14:29.944 Test: blockdev nvme admin passthru ...passed 00:14:29.944 Test: blockdev copy ...passed 00:14:29.944 00:14:29.944 Run Summary: Type Total Ran Passed Failed Inactive 00:14:29.944 suites 1 1 n/a 0 0 00:14:29.944 tests 23 23 23 0 0 00:14:29.944 asserts 140 140 140 0 n/a 00:14:29.944 00:14:29.944 Elapsed time = 0.132 seconds 00:14:29.944 0 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 486926 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 486926 ']' 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 486926 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486926 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486926' 00:14:29.944 killing process with pid 486926 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 486926 00:14:29.944 23:56:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 486926 00:14:31.841 23:56:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:31.841 00:14:31.841 real 0m6.583s 00:14:31.841 user 0m18.847s 00:14:31.841 sys 0m0.953s 00:14:31.841 23:56:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.841 23:56:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:31.841 ************************************ 00:14:31.841 END TEST bdev_bounds 00:14:31.841 ************************************ 00:14:31.841 23:56:10 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 '' 00:14:31.841 23:56:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:31.841 23:56:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.841 23:56:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.841 ************************************ 00:14:31.841 START TEST bdev_nbd 00:14:31.841 ************************************ 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 '' 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1') 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1') 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=487622 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 487622 /var/tmp/spdk-nbd.sock 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 487622 ']' 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:31.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
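The nbd suite below exports each bdev as a kernel /dev/nbdX node through the dedicated /var/tmp/spdk-nbd.sock RPC socket, waits for the node to show up in /proc/partitions, and proves it is readable with a single direct-I/O 4 KiB dd. A condensed sketch of that start/verify/stop cycle (the waitfornbd helper in the trace retries up to 20 times; the loop below is a simplification, and /tmp/nbdtest stands in for the test file path):

# Sketch: export Nvme0n1 over NBD and verify it, as nbd_function_test does.
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk Nvme0n1 /dev/nbd0
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # cf. waitfornbd
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # expect 1+0 records in/out
$rpc nbd_stop_disk /dev/nbd0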
00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.841 23:56:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:31.841 [2024-12-09 23:56:10.174447] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:14:31.841 [2024-12-09 23:56:10.174518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.841 [2024-12-09 23:56:10.244469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.841 [2024-12-09 23:56:10.305218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.407 [2024-12-09 23:56:10.658648] 'OCF_Core' volume operations registered 00:14:32.407 [2024-12-09 23:56:10.658738] 'OCF_Cache' volume operations registered 00:14:32.407 [2024-12-09 23:56:10.666075] 'OCF Composite' volume operations registered 00:14:32.407 [2024-12-09 23:56:10.673394] 'SPDK_block_device' volume operations registered 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:36.587 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:36.588 23:56:14 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.588 1+0 records in 00:14:36.588 1+0 records out 00:14:36.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283503 s, 14.4 MB/s 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:14:36.588 23:56:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:36.845 { 00:14:36.845 "nbd_device": "/dev/nbd0", 00:14:36.845 "bdev_name": "Nvme0n1" 00:14:36.845 } 00:14:36.845 ]' 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:36.845 { 00:14:36.845 "nbd_device": "/dev/nbd0", 00:14:36.845 "bdev_name": "Nvme0n1" 00:14:36.845 } 00:14:36.845 ]' 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.845 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:37.777 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:37.778 23:56:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.343 23:56:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:14:38.908 /dev/nbd0 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
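The waitfornbd helper traced above gates progress on two checks: the node appears in /proc/partitions, and a direct-I/O read actually returns data. A reconstruction of its shape from the trace (the real helper lives in autotest_common.sh; the sleep between retries and the temp-file path are assumptions):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do          # wait for the kernel to register the node
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do          # prove the SPDK backend answers I/O
            dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s "$tmp"); rm -f "$tmp"
        [ "$size" != 0 ]                          # non-empty O_DIRECT read => device is live
    }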
00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.908 1+0 records in 00:14:38.908 1+0 records out 00:14:38.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275895 s, 14.8 MB/s 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.908 23:56:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:39.841 { 00:14:39.841 "nbd_device": "/dev/nbd0", 00:14:39.841 "bdev_name": "Nvme0n1" 00:14:39.841 } 00:14:39.841 ]' 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:39.841 { 00:14:39.841 "nbd_device": "/dev/nbd0", 00:14:39.841 "bdev_name": "Nvme0n1" 00:14:39.841 } 00:14:39.841 ]' 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:14:39.841 
23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:39.841 256+0 records in 00:14:39.841 256+0 records out 00:14:39.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542766 s, 193 MB/s 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:39.841 256+0 records in 00:14:39.841 256+0 records out 00:14:39.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205916 s, 50.9 MB/s 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.841 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:40.100 23:56:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.100 23:56:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:40.666 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.666 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.666 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:40.924 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:41.490 malloc_lvol_verify 00:14:41.490 23:56:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:41.747 f47ce74c-a795-4506-ab90-14c224572a1e 00:14:41.748 23:56:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:42.313 413af1f1-b93a-4ab5-aefa-fc7f6fe95a3f 00:14:42.313 23:56:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:42.878 /dev/nbd0 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:43.137 mke2fs 1.47.0 (5-Feb-2023) 00:14:43.137 Discarding device blocks: 0/4096 done 00:14:43.137 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:43.137 00:14:43.137 Allocating group tables: 0/1 done 00:14:43.137 Writing inode tables: 0/1 done 00:14:43.137 Creating journal (1024 blocks): done 00:14:43.137 Writing superblocks and filesystem accounting information: 0/1 done 00:14:43.137 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.137 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 487622 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 487622 ']' 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 487622 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487622 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487622' 00:14:43.394 killing process with pid 487622 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 487622 00:14:43.394 23:56:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 487622 00:14:45.291 23:56:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:45.291 00:14:45.291 real 0m13.474s 00:14:45.291 user 0m18.112s 00:14:45.291 sys 0m2.854s 00:14:45.291 23:56:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 
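The nbd_with_lvol_verify step above layers a logical volume on a malloc bdev, exports it over NBD, and proves the advertised capacity is usable by putting ext4 on it. The same sequence, condensed, with sizes copied from the transcript (a 16 MiB malloc bdev with 512-byte blocks carrying a 4 MiB lvol):

    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    cat /sys/block/nbd0/size     # 8192 x 512 B sectors = 4 MiB, matching the lvol
    mkfs.ext4 /dev/nbd0          # mkfs succeeds only if reads and writes round-trip
    $rpc nbd_stop_disk /dev/nbd0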
00:14:45.291 23:56:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:45.291 ************************************ 00:14:45.291 END TEST bdev_nbd 00:14:45.291 ************************************ 00:14:45.291 23:56:23 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:14:45.291 23:56:23 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:14:45.291 23:56:23 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:14:45.291 skipping fio tests on NVMe due to multi-ns failures. 00:14:45.291 23:56:23 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:45.291 23:56:23 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:45.291 23:56:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:45.291 23:56:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.291 23:56:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.291 ************************************ 00:14:45.291 START TEST bdev_verify 00:14:45.291 ************************************ 00:14:45.291 23:56:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:45.291 [2024-12-09 23:56:23.737334] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:14:45.291 [2024-12-09 23:56:23.737488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489297 ] 00:14:45.549 [2024-12-09 23:56:23.888566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.549 [2024-12-09 23:56:24.002128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.549 [2024-12-09 23:56:24.002132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.807 [2024-12-09 23:56:24.269599] 'OCF_Core' volume operations registered 00:14:45.807 [2024-12-09 23:56:24.269648] 'OCF_Cache' volume operations registered 00:14:45.807 [2024-12-09 23:56:24.274198] 'OCF Composite' volume operations registered 00:14:45.807 [2024-12-09 23:56:24.278557] 'SPDK_block_device' volume operations registered 00:14:49.085 Running I/O for 5 seconds... 
00:14:51.023 25321.00 IOPS, 98.91 MiB/s [2024-12-09T22:56:30.476Z] 26592.50 IOPS, 103.88 MiB/s [2024-12-09T22:56:31.408Z] 27373.67 IOPS, 106.93 MiB/s [2024-12-09T22:56:32.340Z] 27250.50 IOPS, 106.45 MiB/s [2024-12-09T22:56:32.340Z] 27280.60 IOPS, 106.56 MiB/s 00:14:53.820 Latency(us) 00:14:53.820 [2024-12-09T22:56:32.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.820 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:53.820 Verification LBA range: start 0x0 length 0x74706db 00:14:53.820 Nvme0n1 : 5.01 13636.44 53.27 0.00 0.00 9332.21 81.92 12815.93 00:14:53.820 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:53.820 Verification LBA range: start 0x74706db length 0x74706db 00:14:53.820 Nvme0n1 : 5.01 13614.08 53.18 0.00 0.00 9347.33 73.96 12913.02 00:14:53.820 [2024-12-09T22:56:32.340Z] =================================================================================================================== 00:14:53.820 [2024-12-09T22:56:32.340Z] Total : 27250.52 106.45 0.00 0.00 9339.76 73.96 12913.02 00:14:55.770 00:14:55.770 real 0m10.253s 00:14:55.770 user 0m18.535s 00:14:55.770 sys 0m0.695s 00:14:55.770 23:56:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.770 23:56:33 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 ************************************ 00:14:55.770 END TEST bdev_verify 00:14:55.770 ************************************ 00:14:55.770 23:56:33 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:55.770 23:56:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:55.770 23:56:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.770 23:56:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 ************************************ 00:14:55.770 START TEST bdev_verify_big_io 00:14:55.770 ************************************ 00:14:55.770 23:56:33 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:55.770 [2024-12-09 23:56:34.057074] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
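Both bdev_verify and the big-I/O variant starting here are plain bdevperf invocations that differ only in I/O size. The command shape, with flags copied from the transcript:

    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    $SPDK/build/examples/bdevperf --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128   : 128 outstanding I/Os
    # -o 4096  : 4 KiB per I/O (the big-I/O run passes -o 65536 instead)
    # -w verify: write a pattern, read it back, and compare
    # -t 5     : run for 5 seconds; -m 0x3 spreads reactors across cores 0-1
    # -C is passed through exactly as in the transcript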
00:14:55.770 [2024-12-09 23:56:34.057231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490493 ] 00:14:55.770 [2024-12-09 23:56:34.213400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:56.036 [2024-12-09 23:56:34.305063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.036 [2024-12-09 23:56:34.305067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.308 [2024-12-09 23:56:34.572373] 'OCF_Core' volume operations registered 00:14:56.308 [2024-12-09 23:56:34.572424] 'OCF_Cache' volume operations registered 00:14:56.308 [2024-12-09 23:56:34.576882] 'OCF Composite' volume operations registered 00:14:56.308 [2024-12-09 23:56:34.581333] 'SPDK_block_device' volume operations registered 00:14:58.987 Running I/O for 5 seconds... 00:15:01.298 1523.00 IOPS, 95.19 MiB/s [2024-12-09T22:56:40.753Z] 1695.00 IOPS, 105.94 MiB/s [2024-12-09T22:56:42.127Z] 1697.33 IOPS, 106.08 MiB/s [2024-12-09T22:56:42.693Z] 1742.00 IOPS, 108.88 MiB/s 00:15:04.173 Latency(us) 00:15:04.173 [2024-12-09T22:56:42.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.173 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:04.173 Verification LBA range: start 0x0 length 0x74706d 00:15:04.173 Nvme0n1 : 5.03 833.98 52.12 0.00 0.00 149646.87 2572.89 160004.93 00:15:04.173 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:04.173 Verification LBA range: start 0x74706d length 0x74706d 00:15:04.173 Nvme0n1 : 5.02 862.72 53.92 0.00 0.00 144875.50 679.63 150684.25 00:15:04.173 [2024-12-09T22:56:42.693Z] =================================================================================================================== 00:15:04.173 [2024-12-09T22:56:42.693Z] Total : 1696.70 106.04 0.00 0.00 147223.14 679.63 160004.93 00:15:06.072 00:15:06.072 real 0m10.205s 00:15:06.072 user 0m18.469s 00:15:06.072 sys 0m0.686s 00:15:06.073 23:56:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.073 23:56:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.073 ************************************ 00:15:06.073 END TEST bdev_verify_big_io 00:15:06.073 ************************************ 00:15:06.073 23:56:44 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:06.073 23:56:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:06.073 23:56:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.073 23:56:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.073 ************************************ 00:15:06.073 START TEST bdev_write_zeroes 00:15:06.073 ************************************ 00:15:06.073 23:56:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:06.073 [2024-12-09 23:56:44.312737] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 
initialization... 00:15:06.073 [2024-12-09 23:56:44.312935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491651 ] 00:15:06.073 [2024-12-09 23:56:44.466859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.073 [2024-12-09 23:56:44.573361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.644 [2024-12-09 23:56:44.958703] 'OCF_Core' volume operations registered 00:15:06.644 [2024-12-09 23:56:44.958812] 'OCF_Cache' volume operations registered 00:15:06.644 [2024-12-09 23:56:44.966456] 'OCF Composite' volume operations registered 00:15:06.644 [2024-12-09 23:56:44.975716] 'SPDK_block_device' volume operations registered 00:15:09.927 Running I/O for 1 seconds... 00:15:10.494 30720.00 IOPS, 120.00 MiB/s 00:15:10.494 Latency(us) 00:15:10.494 [2024-12-09T22:56:49.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.494 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:10.494 Nvme0n1 : 1.01 30776.90 120.22 0.00 0.00 4141.90 673.56 5971.06 00:15:10.494 [2024-12-09T22:56:49.014Z] =================================================================================================================== 00:15:10.494 [2024-12-09T22:56:49.014Z] Total : 30776.90 120.22 0.00 0.00 4141.90 673.56 5971.06 00:15:12.394 00:15:12.394 real 0m6.389s 00:15:12.394 user 0m4.830s 00:15:12.394 sys 0m0.790s 00:15:12.394 23:56:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.394 23:56:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:12.394 ************************************ 00:15:12.394 END TEST bdev_write_zeroes 00:15:12.394 ************************************ 00:15:12.394 23:56:50 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:12.394 23:56:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:12.394 23:56:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.394 23:56:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.394 ************************************ 00:15:12.394 START TEST bdev_json_nonenclosed 00:15:12.394 ************************************ 00:15:12.394 23:56:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:12.394 [2024-12-09 23:56:50.753000] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:15:12.394 [2024-12-09 23:56:50.753148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492374 ] 00:15:12.394 [2024-12-09 23:56:50.903933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.652 [2024-12-09 23:56:51.011749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.652 [2024-12-09 23:56:51.011945] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:12.652 [2024-12-09 23:56:51.012000] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:12.652 [2024-12-09 23:56:51.012030] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:12.652 00:15:12.652 real 0m0.443s 00:15:12.652 user 0m0.300s 00:15:12.652 sys 0m0.138s 00:15:12.652 23:56:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.652 23:56:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:12.652 ************************************ 00:15:12.652 END TEST bdev_json_nonenclosed 00:15:12.652 ************************************ 00:15:12.652 23:56:51 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:12.652 23:56:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:12.652 23:56:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.652 23:56:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.910 ************************************ 00:15:12.910 START TEST bdev_json_nonarray 00:15:12.910 ************************************ 00:15:12.910 23:56:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:12.910 [2024-12-09 23:56:51.228301] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:15:12.910 [2024-12-09 23:56:51.228399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492423 ] 00:15:12.910 [2024-12-09 23:56:51.337050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.169 [2024-12-09 23:56:51.437550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.169 [2024-12-09 23:56:51.437745] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
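The two negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf deliberately malformed --json configs and expect it to stop with the errors shown. Illustrative inputs that would trip each check; the file contents are a sketch, only the error strings come from the log:

    # top level is not a JSON object
    # -> "Invalid JSON configuration: not enclosed in {}."
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    # 'subsystems' is an object, not an array
    # -> "Invalid JSON configuration: 'subsystems' should be an array."
    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF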
00:15:13.169 [2024-12-09 23:56:51.437825] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:13.169 [2024-12-09 23:56:51.437838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:13.169 00:15:13.169 real 0m0.397s 00:15:13.169 user 0m0.263s 00:15:13.169 sys 0m0.130s 00:15:13.169 23:56:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.169 23:56:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:13.169 ************************************ 00:15:13.169 END TEST bdev_json_nonarray 00:15:13.169 ************************************ 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:15:13.169 23:56:51 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:15:13.169 00:15:13.169 real 0m59.973s 00:15:13.169 user 1m28.630s 00:15:13.169 sys 0m8.562s 00:15:13.169 23:56:51 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.169 23:56:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.169 ************************************ 00:15:13.169 END TEST blockdev_nvme 00:15:13.169 ************************************ 00:15:13.169 23:56:51 -- spdk/autotest.sh@209 -- # uname -s 00:15:13.169 23:56:51 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:15:13.169 23:56:51 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt 00:15:13.169 23:56:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:13.169 23:56:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.169 23:56:51 -- common/autotest_common.sh@10 -- # set +x 00:15:13.169 ************************************ 00:15:13.169 START TEST blockdev_nvme_gpt 00:15:13.169 ************************************ 00:15:13.169 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt 00:15:13.428 * Looking for test storage... 
00:15:13.428 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev 00:15:13.428 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:13.428 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:15:13.428 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:13.428 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.428 23:56:51 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:15:13.428 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.428 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:13.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.428 --rc genhtml_branch_coverage=1 00:15:13.428 --rc genhtml_function_coverage=1 00:15:13.428 --rc genhtml_legend=1 00:15:13.428 --rc geninfo_all_blocks=1 00:15:13.428 --rc geninfo_unexecuted_blocks=1 00:15:13.428 00:15:13.428 ' 00:15:13.428 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:13.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:15:13.428 --rc genhtml_branch_coverage=1 00:15:13.428 --rc genhtml_function_coverage=1 00:15:13.428 --rc genhtml_legend=1 00:15:13.428 --rc geninfo_all_blocks=1 00:15:13.429 --rc geninfo_unexecuted_blocks=1 00:15:13.429 00:15:13.429 ' 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:13.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.429 --rc genhtml_branch_coverage=1 00:15:13.429 --rc genhtml_function_coverage=1 00:15:13.429 --rc genhtml_legend=1 00:15:13.429 --rc geninfo_all_blocks=1 00:15:13.429 --rc geninfo_unexecuted_blocks=1 00:15:13.429 00:15:13.429 ' 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:13.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.429 --rc genhtml_branch_coverage=1 00:15:13.429 --rc genhtml_function_coverage=1 00:15:13.429 --rc genhtml_legend=1 00:15:13.429 --rc geninfo_all_blocks=1 00:15:13.429 --rc geninfo_unexecuted_blocks=1 00:15:13.429 00:15:13.429 ' 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=492598 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 
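The gpt suite runs against a freshly started target: blockdev.sh launches spdk_tgt with no arguments and waitforlisten polls the default RPC socket before any test proceeds. A condensed equivalent; the until-loop is a simplification of waitforlisten, and rpc_get_methods is used here only as a cheap RPC to probe with:

    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt &       # empty config; bdevs get attached via RPC later
    spdk_tgt_pid=$!
    trap 'kill $spdk_tgt_pid' SIGINT SIGTERM EXIT
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                    # wait for /var/tmp/spdk.sock to answer
    done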
00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:13.429 23:56:51 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 492598 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 492598 ']' 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.429 23:56:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:13.688 [2024-12-09 23:56:52.023958] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:15:13.688 [2024-12-09 23:56:52.024087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492598 ] 00:15:13.688 [2024-12-09 23:56:52.134013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.946 [2024-12-09 23:56:52.227009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.205 [2024-12-09 23:56:52.560218] 'OCF_Core' volume operations registered 00:15:14.205 [2024-12-09 23:56:52.560313] 'OCF_Cache' volume operations registered 00:15:14.205 [2024-12-09 23:56:52.568037] 'OCF Composite' volume operations registered 00:15:14.205 [2024-12-09 23:56:52.575885] 'SPDK_block_device' volume operations registered 00:15:14.463 23:56:52 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.463 23:56:52 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:15:14.463 23:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:14.463 23:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:15:14.463 23:56:52 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:15:15.837 Waiting for block devices as requested 00:15:15.837 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:15:16.097 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:15:16.097 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:15:16.355 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:15:16.355 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:15:16.355 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:15:16.613 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:15:16.613 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:15:16.613 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:15:16.613 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:15:16.871 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:15:16.871 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:15:16.871 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:15:17.129 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:15:17.129 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:15:17.129 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:15:17.129 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@105 
-- # get_zoned_devs 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:84:00.0 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:17.387 23:56:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1') 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:15:17.387 BYT; 00:15:17.387 /dev/nvme0n1:1000GB:nvme:512:512:unknown:INTEL SSDPE2KX010T8:;' 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:15:17.387 BYT; 00:15:17.387 /dev/nvme0n1:1000GB:nvme:512:512:unknown:INTEL SSDPE2KX010T8:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]] 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 
00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:15:17.387 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]] 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:15:17.387 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:15:17.388 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:15:17.388 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:15:17.388 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:15:17.388 23:56:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:15:17.388 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:15:17.388 23:56:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:15:18.321 The operation has completed successfully. 00:15:18.321 23:56:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:15:19.696 The operation has completed successfully. 
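Everything from the parted call through the two sgdisk runs above serves one purpose: stamp the two fresh partitions with the type GUIDs that setup_gpt_conf greps out of module/bdev/gpt/gpt.h, so the target later auto-exposes them as Nvme0n1p1/Nvme0n1p2. The condensed sequence, with all GUIDs copied from the transcript:

    SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b        # SPDK_GPT_PART_TYPE_GUID
    SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c    # SPDK_GPT_PART_TYPE_GUID_OLD
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # retag partition 1 with the current SPDK type GUID, partition 2 with the old one,
    # and give each the unique partition GUID the later bdev_get_bdevs output reports
    sgdisk -t 1:$SPDK_GPT_GUID     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:$SPDK_GPT_OLD_GUID -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1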
00:15:19.696 23:56:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:15:21.070 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:15:21.070 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:15:21.070 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:15:21.070 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:15:21.070 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:15:21.070 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:15:21.070 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:15:21.070 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:15:21.070 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:15:22.005 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:15:22.005 23:57:00 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:15:22.005 23:57:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.005 23:57:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:22.005 [] 00:15:22.005 23:57:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.005 23:57:00 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:15:22.005 23:57:00 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:15:22.005 23:57:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:15:22.005 23:57:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:15:22.005 23:57:00 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:84:00.0" } } ] }'\''' 00:15:22.005 23:57:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.005 23:57:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:25.289 23:57:03 
blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 512,' ' "num_blocks": 976760832,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 2048,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 512,' ' "num_blocks": 976760831,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 976762880,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' 
' }' '}' 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1p1 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:25.289 23:57:03 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 492598 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 492598 ']' 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 492598 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492598 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492598' 00:15:25.289 killing process with pid 492598 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 492598 00:15:25.289 23:57:03 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 492598 00:15:27.190 23:57:05 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:27.190 23:57:05 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:15:27.190 23:57:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:27.190 23:57:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.190 23:57:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:27.190 ************************************ 00:15:27.190 START TEST bdev_hello_world 00:15:27.190 ************************************ 00:15:27.190 23:57:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:15:27.190 [2024-12-09 23:57:05.539925] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
00:15:27.190 [2024-12-09 23:57:05.540079] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495357 ] 00:15:27.190 [2024-12-09 23:57:05.682375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.449 [2024-12-09 23:57:05.786688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.709 [2024-12-09 23:57:06.148865] 'OCF_Core' volume operations registered 00:15:27.709 [2024-12-09 23:57:06.148914] 'OCF_Cache' volume operations registered 00:15:27.709 [2024-12-09 23:57:06.155860] 'OCF Composite' volume operations registered 00:15:27.709 [2024-12-09 23:57:06.162917] 'SPDK_block_device' volume operations registered 00:15:31.046 [2024-12-09 23:57:09.043966] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:31.046 [2024-12-09 23:57:09.044046] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:15:31.046 [2024-12-09 23:57:09.044094] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:31.046 [2024-12-09 23:57:09.049156] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:31.046 [2024-12-09 23:57:09.049540] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:31.046 [2024-12-09 23:57:09.049599] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:31.046 [2024-12-09 23:57:09.052411] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:15:31.046 00:15:31.046 [2024-12-09 23:57:09.052479] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:32.420 00:15:32.420 real 0m5.335s 00:15:32.420 user 0m3.798s 00:15:32.420 sys 0m0.763s 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:32.420 ************************************ 00:15:32.420 END TEST bdev_hello_world 00:15:32.420 ************************************ 00:15:32.420 23:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:32.420 23:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:32.420 23:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.420 23:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:32.420 ************************************ 00:15:32.420 START TEST bdev_bounds 00:15:32.420 ************************************ 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=496517 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 496517' 00:15:32.420 Process bdevio pid: 496517 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 496517 00:15:32.420 
23:57:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 496517 ']' 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.420 23:57:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:32.420 [2024-12-09 23:57:10.907476] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:15:32.420 [2024-12-09 23:57:10.907563] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496517 ] 00:15:32.678 [2024-12-09 23:57:10.989165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:32.678 [2024-12-09 23:57:11.048205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.678 [2024-12-09 23:57:11.048258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.678 [2024-12-09 23:57:11.048262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.936 [2024-12-09 23:57:11.319487] 'OCF_Core' volume operations registered 00:15:32.936 [2024-12-09 23:57:11.319539] 'OCF_Cache' volume operations registered 00:15:32.936 [2024-12-09 23:57:11.324236] 'OCF Composite' volume operations registered 00:15:32.936 [2024-12-09 23:57:11.328936] 'SPDK_block_device' volume operations registered 00:15:37.122 23:57:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.122 23:57:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:37.122 23:57:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:37.122 I/O targets: 00:15:37.122 Nvme0n1p1: 976760832 blocks of 512 bytes (476934 MiB) 00:15:37.122 Nvme0n1p2: 976760831 blocks of 512 bytes (476934 MiB) 00:15:37.122 00:15:37.122 00:15:37.122 CUnit - A unit testing framework for C - Version 2.1-3 00:15:37.122 http://cunit.sourceforge.net/ 00:15:37.122 00:15:37.122 00:15:37.122 Suite: bdevio tests on: Nvme0n1p2 00:15:37.122 Test: blockdev write read block ...passed 00:15:37.122 Test: blockdev write zeroes read block ...passed 00:15:37.122 Test: blockdev write zeroes read no split ...passed 00:15:37.122 Test: blockdev write zeroes read split ...passed 00:15:37.122 Test: blockdev write zeroes read split partial ...passed 00:15:37.122 Test: blockdev reset ...[2024-12-09 23:57:15.177616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:84:00.0, 0] resetting controller 00:15:37.122 [2024-12-09 23:57:15.180068] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:84:00.0, 0] Resetting controller successful. 
00:15:37.122 passed 00:15:37.122 Test: blockdev write read 8 blocks ...passed 00:15:37.122 Test: blockdev write read size > 128k ...passed 00:15:37.122 Test: blockdev write read invalid size ...passed 00:15:37.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:37.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:37.122 Test: blockdev write read max offset ...passed 00:15:37.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:37.122 Test: blockdev writev readv 8 blocks ...passed 00:15:37.122 Test: blockdev writev readv 30 x 1block ...passed 00:15:37.122 Test: blockdev writev readv block ...passed 00:15:37.122 Test: blockdev writev readv size > 128k ...passed 00:15:37.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:37.122 Test: blockdev comparev and writev ...passed 00:15:37.122 Test: blockdev nvme passthru rw ...passed 00:15:37.122 Test: blockdev nvme passthru vendor specific ...passed 00:15:37.122 Test: blockdev nvme admin passthru ...passed 00:15:37.122 Test: blockdev copy ...passed 00:15:37.122 Suite: bdevio tests on: Nvme0n1p1 00:15:37.122 Test: blockdev write read block ...passed 00:15:37.122 Test: blockdev write zeroes read block ...passed 00:15:37.122 Test: blockdev write zeroes read no split ...passed 00:15:37.122 Test: blockdev write zeroes read split ...passed 00:15:37.122 Test: blockdev write zeroes read split partial ...passed 00:15:37.123 Test: blockdev reset ...[2024-12-09 23:57:15.249217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:84:00.0, 0] resetting controller 00:15:37.123 [2024-12-09 23:57:15.251376] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:84:00.0, 0] Resetting controller successful. 
00:15:37.123 passed 00:15:37.123 Test: blockdev write read 8 blocks ...passed 00:15:37.123 Test: blockdev write read size > 128k ...passed 00:15:37.123 Test: blockdev write read invalid size ...passed 00:15:37.123 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:37.123 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:37.123 Test: blockdev write read max offset ...passed 00:15:37.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:37.123 Test: blockdev writev readv 8 blocks ...passed 00:15:37.123 Test: blockdev writev readv 30 x 1block ...passed 00:15:37.123 Test: blockdev writev readv block ...passed 00:15:37.123 Test: blockdev writev readv size > 128k ...passed 00:15:37.123 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:37.123 Test: blockdev comparev and writev ...passed 00:15:37.123 Test: blockdev nvme passthru rw ...passed 00:15:37.123 Test: blockdev nvme passthru vendor specific ...passed 00:15:37.123 Test: blockdev nvme admin passthru ...passed 00:15:37.123 Test: blockdev copy ...passed 00:15:37.123 00:15:37.123 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.123 suites 2 2 n/a 0 0 00:15:37.123 tests 46 46 46 0 0 00:15:37.123 asserts 260 260 260 0 n/a 00:15:37.123 00:15:37.123 Elapsed time = 0.261 seconds 00:15:37.123 0 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 496517 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 496517 ']' 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 496517 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 496517 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 496517' 00:15:37.123 killing process with pid 496517 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 496517 00:15:37.123 23:57:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 496517 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:39.026 00:15:39.026 real 0m6.230s 00:15:39.026 user 0m17.662s 00:15:39.026 sys 0m0.874s 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:39.026 ************************************ 00:15:39.026 END TEST bdev_bounds 00:15:39.026 ************************************ 00:15:39.026 23:57:17 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:15:39.026 23:57:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:39.026 23:57:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.026 
23:57:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:39.026 ************************************ 00:15:39.026 START TEST bdev_nbd 00:15:39.026 ************************************ 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:39.026 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=497209 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 497209 /var/tmp/spdk-nbd.sock 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 497209 ']' 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:39.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.027 23:57:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:39.027 [2024-12-09 23:57:17.222674] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:15:39.027 [2024-12-09 23:57:17.222862] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.027 [2024-12-09 23:57:17.357737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.027 [2024-12-09 23:57:17.452357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.592 [2024-12-09 23:57:17.843972] 'OCF_Core' volume operations registered 00:15:39.592 [2024-12-09 23:57:17.844018] 'OCF_Cache' volume operations registered 00:15:39.592 [2024-12-09 23:57:17.852710] 'OCF Composite' volume operations registered 00:15:39.592 [2024-12-09 23:57:17.861121] 'SPDK_block_device' volume operations registered 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:15:43.780 23:57:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 
20 )) 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.780 1+0 records in 00:15:43.780 1+0 records out 00:15:43.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286006 s, 14.3 MB/s 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:15:43.780 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.037 1+0 records in 00:15:44.037 1+0 records out 00:15:44.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540646 s, 7.6 MB/s 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:15:44.037 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:44.603 { 00:15:44.603 "nbd_device": "/dev/nbd0", 00:15:44.603 "bdev_name": "Nvme0n1p1" 00:15:44.603 }, 00:15:44.603 { 00:15:44.603 "nbd_device": "/dev/nbd1", 00:15:44.603 "bdev_name": "Nvme0n1p2" 00:15:44.603 } 00:15:44.603 ]' 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:44.603 { 00:15:44.603 "nbd_device": "/dev/nbd0", 00:15:44.603 "bdev_name": "Nvme0n1p1" 00:15:44.603 }, 00:15:44.603 { 00:15:44.603 "nbd_device": "/dev/nbd1", 00:15:44.603 "bdev_name": "Nvme0n1p2" 00:15:44.603 } 00:15:44.603 ]' 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.603 23:57:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.170 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.428 23:57:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.995 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:15:46.562 /dev/nbd0 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.562 1+0 records in 00:15:46.562 1+0 records out 00:15:46.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367457 s, 11.1 MB/s 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.562 23:57:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:15:46.820 /dev/nbd1 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.820 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.820 1+0 records in 00:15:46.820 1+0 records out 00:15:46.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480293 s, 8.5 MB/s 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.078 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:47.335 { 00:15:47.335 "nbd_device": "/dev/nbd0", 00:15:47.335 "bdev_name": "Nvme0n1p1" 00:15:47.335 }, 00:15:47.335 { 00:15:47.335 "nbd_device": "/dev/nbd1", 00:15:47.335 "bdev_name": "Nvme0n1p2" 00:15:47.335 } 00:15:47.335 ]' 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:47.335 { 00:15:47.335 "nbd_device": "/dev/nbd0", 00:15:47.335 "bdev_name": "Nvme0n1p1" 00:15:47.335 }, 00:15:47.335 { 00:15:47.335 "nbd_device": "/dev/nbd1", 00:15:47.335 "bdev_name": "Nvme0n1p2" 00:15:47.335 } 00:15:47.335 ]' 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:47.335 /dev/nbd1' 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:47.335 /dev/nbd1' 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:47.335 256+0 records in 00:15:47.335 256+0 records out 00:15:47.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00892252 s, 118 MB/s 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.335 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:47.594 256+0 records in 00:15:47.594 256+0 records out 00:15:47.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315237 s, 33.3 MB/s 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:47.594 256+0 records in 00:15:47.594 256+0 records out 00:15:47.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369974 s, 28.3 MB/s 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.594 23:57:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.852 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.110 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:48.677 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:48.677 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:48.677 23:57:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:48.677 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:48.677 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:48.677 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:48.677 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:48.678 23:57:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:48.678 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:49.244 malloc_lvol_verify 00:15:49.244 23:57:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:49.811 2dc45eb9-aaa5-4dee-a4a3-c7f2eb75b787 00:15:49.811 23:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:50.378 2247b4d1-de16-4f0e-ab1d-02fa18bc73f3 00:15:50.378 23:57:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:50.944 /dev/nbd0 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:50.944 mke2fs 1.47.0 (5-Feb-2023) 00:15:50.944 Discarding device blocks: 0/4096 done 00:15:50.944 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:50.944 00:15:50.944 Allocating group tables: 0/1 done 00:15:50.944 Writing inode tables: 0/1 done 00:15:50.944 Creating journal (1024 blocks): done 00:15:50.944 Writing superblocks and filesystem accounting information: 0/1 done 00:15:50.944 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.944 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.203 23:57:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 497209 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 497209 ']' 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 497209 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 497209 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 497209' 00:15:51.203 killing process with pid 497209 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 497209 00:15:51.203 23:57:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 497209 00:15:53.104 23:57:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:53.104 00:15:53.104 real 0m14.224s 00:15:53.104 user 0m18.976s 00:15:53.104 sys 0m3.570s 00:15:53.104 23:57:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.104 23:57:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:53.104 ************************************ 00:15:53.104 END TEST bdev_nbd 00:15:53.104 ************************************ 00:15:53.104 23:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:53.104 23:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:15:53.104 23:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:15:53.104 23:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:15:53.104 skipping fio tests on NVMe due to multi-ns failures. 
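The bdev_nbd test that finished above round-trips each GPT bdev through the kernel NBD driver: the bdev is exported as /dev/nbdX over the /var/tmp/spdk-nbd.sock RPC socket, a random 1 MiB file is written through with dd, and cmp confirms the device reads back identically. Condensed from the nbd_common.sh calls in the trace (the temp-file path here is illustrative):

    RPC='/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0             # expose the bdev as a kernel block device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0             # verify the 1 MiB round-trip byte-for-byte
    $RPC nbd_stop_disk /dev/nbd0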
00:15:53.104 23:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:53.104 23:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:53.104 23:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:53.104 23:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.104 23:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:53.104 ************************************ 00:15:53.104 START TEST bdev_verify 00:15:53.104 ************************************ 00:15:53.104 23:57:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:53.104 [2024-12-09 23:57:31.518924] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:15:53.104 [2024-12-09 23:57:31.519078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499034 ] 00:15:53.364 [2024-12-09 23:57:31.638842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.364 [2024-12-09 23:57:31.736982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.364 [2024-12-09 23:57:31.736985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.622 [2024-12-09 23:57:32.004240] 'OCF_Core' volume operations registered 00:15:53.622 [2024-12-09 23:57:32.004292] 'OCF_Cache' volume operations registered 00:15:53.622 [2024-12-09 23:57:32.008739] 'OCF Composite' volume operations registered 00:15:53.622 [2024-12-09 23:57:32.013269] 'SPDK_block_device' volume operations registered 00:15:56.904 Running I/O for 5 seconds... 
00:15:58.404 29696.00 IOPS, 116.00 MiB/s [2024-12-09T22:57:38.299Z] 29952.00 IOPS, 117.00 MiB/s [2024-12-09T22:57:39.233Z] 30037.33 IOPS, 117.33 MiB/s [2024-12-09T22:57:40.168Z] 29984.00 IOPS, 117.12 MiB/s [2024-12-09T22:57:40.168Z] 30028.80 IOPS, 117.30 MiB/s 00:16:01.648 Latency(us) 00:16:01.648 [2024-12-09T22:57:40.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.648 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.648 Verification LBA range: start 0x0 length 0x3a38300 00:16:01.648 Nvme0n1p1 : 5.01 7484.90 29.24 0.00 0.00 17055.77 2536.49 14660.65 00:16:01.648 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.648 Verification LBA range: start 0x3a38300 length 0x3a38300 00:16:01.648 Nvme0n1p1 : 5.01 7485.22 29.24 0.00 0.00 17054.24 2669.99 15631.55 00:16:01.648 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:01.648 Verification LBA range: start 0x0 length 0x3a382ff 00:16:01.648 Nvme0n1p2 : 5.02 7472.52 29.19 0.00 0.00 17057.39 922.36 16505.36 00:16:01.648 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.648 Verification LBA range: start 0x3a382ff length 0x3a382ff 00:16:01.648 Nvme0n1p2 : 5.02 7489.24 29.25 0.00 0.00 17022.20 2500.08 15922.82 00:16:01.648 [2024-12-09T22:57:40.168Z] =================================================================================================================== 00:16:01.648 [2024-12-09T22:57:40.168Z] Total : 29931.88 116.92 0.00 0.00 17047.38 922.36 16505.36 00:16:03.550 00:16:03.550 real 0m10.217s 00:16:03.550 user 0m18.490s 00:16:03.550 sys 0m0.713s 00:16:03.550 23:57:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.550 23:57:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:03.550 ************************************ 00:16:03.550 END TEST bdev_verify 00:16:03.550 ************************************ 00:16:03.550 23:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:03.550 23:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:03.550 23:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.550 23:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:03.550 ************************************ 00:16:03.550 START TEST bdev_verify_big_io 00:16:03.550 ************************************ 00:16:03.550 23:57:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:03.550 [2024-12-09 23:57:41.748114] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
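The verify summary above ties out arithmetically: the MiB/s column is just IOPS times the 4 KiB I/O size. Checking the Total row:

awk 'BEGIN { printf "%.2f MiB/s\n", 29931.88 * 4096 / (1024 * 1024) }'
# prints 116.92 MiB/s, matching the Total row above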
00:16:03.550 [2024-12-09 23:57:41.748198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500109 ] 00:16:03.550 [2024-12-09 23:57:41.845281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:03.550 [2024-12-09 23:57:41.943692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.550 [2024-12-09 23:57:41.943695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.808 [2024-12-09 23:57:42.217356] 'OCF_Core' volume operations registered 00:16:03.808 [2024-12-09 23:57:42.217400] 'OCF_Cache' volume operations registered 00:16:03.808 [2024-12-09 23:57:42.221991] 'OCF Composite' volume operations registered 00:16:03.808 [2024-12-09 23:57:42.226610] 'SPDK_block_device' volume operations registered 00:16:07.098 Running I/O for 5 seconds... 00:16:08.979 2048.00 IOPS, 128.00 MiB/s [2024-12-09T22:57:48.879Z] 2304.00 IOPS, 144.00 MiB/s [2024-12-09T22:57:49.817Z] 2436.67 IOPS, 152.29 MiB/s [2024-12-09T22:57:50.383Z] 2427.50 IOPS, 151.72 MiB/s 00:16:11.863 Latency(us) 00:16:11.863 [2024-12-09T22:57:50.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.863 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:11.863 Verification LBA range: start 0x0 length 0x3a3830 00:16:11.863 Nvme0n1p1 : 5.18 592.92 37.06 0.00 0.00 212716.10 6092.42 222142.77 00:16:11.863 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:11.863 Verification LBA range: start 0x3a3830 length 0x3a3830 00:16:11.864 Nvme0n1p1 : 5.20 582.63 36.41 0.00 0.00 216236.34 4733.16 225249.66 00:16:11.864 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:11.864 Verification LBA range: start 0x0 length 0x3a382f 00:16:11.864 Nvme0n1p2 : 5.19 591.82 36.99 0.00 0.00 208315.87 3349.62 215928.98 00:16:11.864 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:11.864 Verification LBA range: start 0x3a382f length 0x3a382f 00:16:11.864 Nvme0n1p2 : 5.20 570.44 35.65 0.00 0.00 215879.89 5606.97 223696.21 00:16:11.864 [2024-12-09T22:57:50.384Z] =================================================================================================================== 00:16:11.864 [2024-12-09T22:57:50.384Z] Total : 2337.82 146.11 0.00 0.00 213255.13 3349.62 225249.66 00:16:13.769 00:16:13.769 real 0m10.338s 00:16:13.769 user 0m18.839s 00:16:13.769 sys 0m0.668s 00:16:13.769 23:57:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.769 23:57:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.769 ************************************ 00:16:13.769 END TEST bdev_verify_big_io 00:16:13.769 ************************************ 00:16:13.769 23:57:52 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:13.769 23:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:13.769 23:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.769 23:57:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
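Every test in this file runs through the same run_test wrapper, which prints the starred banners and hands the command to time, producing the real/user/sys triplet that closes each test. Roughly this shape (condensed; the real autotest_common.sh version also manages xtrace state and exit codes):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}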
00:16:13.769 ************************************ 00:16:13.769 START TEST bdev_write_zeroes 00:16:13.769 ************************************ 00:16:13.769 23:57:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:13.769 [2024-12-09 23:57:52.173877] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:16:13.769 [2024-12-09 23:57:52.173940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501297 ] 00:16:14.029 [2024-12-09 23:57:52.306124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.029 [2024-12-09 23:57:52.392681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.289 [2024-12-09 23:57:52.792198] 'OCF_Core' volume operations registered 00:16:14.289 [2024-12-09 23:57:52.792290] 'OCF_Cache' volume operations registered 00:16:14.289 [2024-12-09 23:57:52.800920] 'OCF Composite' volume operations registered 00:16:14.289 [2024-12-09 23:57:52.807851] 'SPDK_block_device' volume operations registered 00:16:17.581 Running I/O for 1 seconds... 00:16:18.521 25344.00 IOPS, 99.00 MiB/s 00:16:18.521 Latency(us) 00:16:18.521 [2024-12-09T22:57:57.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.521 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.521 Nvme0n1p1 : 1.02 12679.20 49.53 0.00 0.00 10064.26 4320.52 14660.65 00:16:18.521 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.521 Nvme0n1p2 : 1.02 12660.35 49.45 0.00 0.00 10049.57 2342.31 16117.00 00:16:18.521 [2024-12-09T22:57:57.041Z] =================================================================================================================== 00:16:18.521 [2024-12-09T22:57:57.041Z] Total : 25339.54 98.98 0.00 0.00 10056.91 2342.31 16117.00 00:16:20.431 00:16:20.431 real 0m6.360s 00:16:20.431 user 0m4.773s 00:16:20.431 sys 0m0.815s 00:16:20.431 23:57:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.431 23:57:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 ************************************ 00:16:20.431 END TEST bdev_write_zeroes 00:16:20.431 ************************************ 00:16:20.431 23:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.431 23:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:20.431 23:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.431 23:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 ************************************ 00:16:20.431 START TEST bdev_json_nonenclosed 00:16:20.431 ************************************ 00:16:20.431 23:57:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.431 [2024-12-09 23:57:58.544719] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:16:20.431 [2024-12-09 23:57:58.544806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502097 ] 00:16:20.431 [2024-12-09 23:57:58.652792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.431 [2024-12-09 23:57:58.759210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.431 [2024-12-09 23:57:58.759397] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:20.431 [2024-12-09 23:57:58.759452] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:20.431 [2024-12-09 23:57:58.759482] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:20.431 00:16:20.431 real 0m0.364s 00:16:20.431 user 0m0.252s 00:16:20.431 sys 0m0.110s 00:16:20.431 23:57:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.431 23:57:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 ************************************ 00:16:20.431 END TEST bdev_json_nonenclosed 00:16:20.431 ************************************ 00:16:20.431 23:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.431 23:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:20.431 23:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.431 23:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:20.431 ************************************ 00:16:20.431 START TEST bdev_json_nonarray 00:16:20.431 ************************************ 00:16:20.431 23:57:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.689 [2024-12-09 23:57:58.982289] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:16:20.689 [2024-12-09 23:57:58.982368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502121 ] 00:16:20.689 [2024-12-09 23:57:59.092926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.689 [2024-12-09 23:57:59.167685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.689 [2024-12-09 23:57:59.167822] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
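Both JSON negative tests hand bdevperf a config that breaks one structural rule each, and both are expected to fail cleanly rather than crash. In miniature (file contents illustrative, not the repo's actual fixtures):

# rejected: the top level is not a JSON object
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# -> Invalid JSON configuration: not enclosed in {}.

# rejected: "subsystems" is an object, not an array
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev" } }
EOF
# -> Invalid JSON configuration: 'subsystems' should be an array.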
00:16:20.689 [2024-12-09 23:57:59.167859] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:20.689 [2024-12-09 23:57:59.167871] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:20.947 00:16:20.947 real 0m0.311s 00:16:20.947 user 0m0.194s 00:16:20.947 sys 0m0.114s 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:20.947 ************************************ 00:16:20.947 END TEST bdev_json_nonarray 00:16:20.947 ************************************ 00:16:20.947 23:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:16:20.947 23:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:16:20.947 23:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:16:20.947 23:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.947 23:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.947 23:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:20.947 ************************************ 00:16:20.947 START TEST bdev_gpt_uuid 00:16:20.947 ************************************ 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=502218 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 502218 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 502218 ']' 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.947 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.948 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.948 23:57:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:20.948 [2024-12-09 23:57:59.360137] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
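The UUID test boots a bare spdk_tgt (the EAL parameter line below), loads the same bdev.json over its RPC socket, and then asserts that each GPT partition bdev is addressable by its unique partition GUID. Condensed to the essential calls, with rpc_cmd being the suite's wrapper around scripts/rpc.py:

rpc_cmd load_config -j /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
rpc_cmd bdev_wait_for_examine
bdev=$(rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]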
00:16:20.948 [2024-12-09 23:57:59.360240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502218 ] 00:16:20.948 [2024-12-09 23:57:59.463011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.207 [2024-12-09 23:57:59.564751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.467 [2024-12-09 23:57:59.941579] 'OCF_Core' volume operations registered 00:16:21.467 [2024-12-09 23:57:59.941624] 'OCF_Cache' volume operations registered 00:16:21.467 [2024-12-09 23:57:59.946320] 'OCF Composite' volume operations registered 00:16:21.467 [2024-12-09 23:57:59.950952] 'SPDK_block_device' volume operations registered 00:16:21.726 23:58:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.726 23:58:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:16:21.726 23:58:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:16:21.726 23:58:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.726 23:58:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:25.022 Some configs were skipped because the RPC state that can call them passed over. 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:16:25.022 { 00:16:25.022 "name": "Nvme0n1p1", 00:16:25.022 "aliases": [ 00:16:25.022 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:16:25.022 ], 00:16:25.022 "product_name": "GPT Disk", 00:16:25.022 "block_size": 512, 00:16:25.022 "num_blocks": 976760832, 00:16:25.022 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:16:25.022 "assigned_rate_limits": { 00:16:25.022 "rw_ios_per_sec": 0, 00:16:25.022 "rw_mbytes_per_sec": 0, 00:16:25.022 "r_mbytes_per_sec": 0, 00:16:25.022 "w_mbytes_per_sec": 0 00:16:25.022 }, 00:16:25.022 "claimed": false, 00:16:25.022 "zoned": false, 00:16:25.022 "supported_io_types": { 00:16:25.022 "read": true, 00:16:25.022 "write": true, 00:16:25.022 "unmap": true, 00:16:25.022 "flush": true, 00:16:25.022 "reset": true, 00:16:25.022 "nvme_admin": false, 00:16:25.022 "nvme_io": false, 00:16:25.022 "nvme_io_md": false, 00:16:25.022 "write_zeroes": true, 00:16:25.022 "zcopy": false, 00:16:25.022 "get_zone_info": false, 
00:16:25.022 "zone_management": false, 00:16:25.022 "zone_append": false, 00:16:25.022 "compare": false, 00:16:25.022 "compare_and_write": false, 00:16:25.022 "abort": true, 00:16:25.022 "seek_hole": false, 00:16:25.022 "seek_data": false, 00:16:25.022 "copy": false, 00:16:25.022 "nvme_iov_md": false 00:16:25.022 }, 00:16:25.022 "driver_specific": { 00:16:25.022 "gpt": { 00:16:25.022 "base_bdev": "Nvme0n1", 00:16:25.022 "offset_blocks": 2048, 00:16:25.022 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:16:25.022 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:16:25.022 "partition_name": "SPDK_TEST_first" 00:16:25.022 } 00:16:25.022 } 00:16:25.022 } 00:16:25.022 ]' 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:16:25.022 { 00:16:25.022 "name": "Nvme0n1p2", 00:16:25.022 "aliases": [ 00:16:25.022 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:16:25.022 ], 00:16:25.022 "product_name": "GPT Disk", 00:16:25.022 "block_size": 512, 00:16:25.022 "num_blocks": 976760831, 00:16:25.022 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:16:25.022 "assigned_rate_limits": { 00:16:25.022 "rw_ios_per_sec": 0, 00:16:25.022 "rw_mbytes_per_sec": 0, 00:16:25.022 "r_mbytes_per_sec": 0, 00:16:25.022 "w_mbytes_per_sec": 0 00:16:25.022 }, 00:16:25.022 "claimed": false, 00:16:25.022 "zoned": false, 00:16:25.022 "supported_io_types": { 00:16:25.022 "read": true, 00:16:25.022 "write": true, 00:16:25.022 "unmap": true, 00:16:25.022 "flush": true, 00:16:25.022 "reset": true, 00:16:25.022 "nvme_admin": false, 00:16:25.022 "nvme_io": false, 00:16:25.022 "nvme_io_md": false, 00:16:25.022 "write_zeroes": true, 00:16:25.022 "zcopy": false, 00:16:25.022 "get_zone_info": false, 00:16:25.022 "zone_management": false, 00:16:25.022 "zone_append": false, 00:16:25.022 "compare": false, 00:16:25.022 "compare_and_write": false, 00:16:25.022 "abort": true, 00:16:25.022 "seek_hole": false, 00:16:25.022 "seek_data": false, 00:16:25.022 "copy": false, 00:16:25.022 "nvme_iov_md": false 00:16:25.022 }, 00:16:25.022 "driver_specific": { 00:16:25.022 "gpt": { 00:16:25.022 "base_bdev": "Nvme0n1", 00:16:25.022 "offset_blocks": 976762880, 00:16:25.022 "partition_type_guid": 
"7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:16:25.022 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:16:25.022 "partition_name": "SPDK_TEST_second" 00:16:25.022 } 00:16:25.022 } 00:16:25.022 } 00:16:25.022 ]' 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 502218 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 502218 ']' 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 502218 00:16:25.022 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:16:25.023 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.023 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 502218 00:16:25.023 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.023 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.023 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 502218' 00:16:25.023 killing process with pid 502218 00:16:25.023 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 502218 00:16:25.023 23:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 502218 00:16:26.932 00:16:26.932 real 0m6.105s 00:16:26.932 user 0m4.974s 00:16:26.932 sys 0m1.086s 00:16:26.932 23:58:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.932 23:58:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:26.932 ************************************ 00:16:26.932 END TEST bdev_gpt_uuid 00:16:26.932 ************************************ 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@34 
-- # [[ gpt = \g\p\t ]] 00:16:26.932 23:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:16:28.313 Waiting for block devices as requested 00:16:28.313 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:16:28.572 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:16:28.572 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:16:28.572 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:16:28.831 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:16:28.831 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:16:28.831 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:16:28.831 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:16:29.090 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:16:29.090 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:16:29.090 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:16:29.090 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:16:29.349 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:16:29.349 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:16:29.349 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:16:29.609 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:16:29.609 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:16:29.609 23:58:07 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:16:29.609 23:58:07 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:16:29.867 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:16:29.867 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:16:29.868 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:29.868 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:29.868 23:58:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:16:29.868 00:16:29.868 real 1m16.572s 00:16:29.868 user 1m42.342s 00:16:29.868 sys 0m13.923s 00:16:29.868 23:58:08 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.868 23:58:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:29.868 ************************************ 00:16:29.868 END TEST blockdev_nvme_gpt 00:16:29.868 ************************************ 00:16:29.868 23:58:08 -- spdk/autotest.sh@212 -- # run_test nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh 00:16:29.868 23:58:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:29.868 23:58:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.868 23:58:08 -- common/autotest_common.sh@10 -- # set +x 00:16:29.868 ************************************ 00:16:29.868 START TEST nvme 00:16:29.868 ************************************ 00:16:29.868 23:58:08 nvme -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh 00:16:29.868 * Looking for test storage... 
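A few lines up, the blockdev teardown wiped the GPT it had written: the 8 bytes erased at 0x200 and near the end of the disk are the two "EFI PART" header signatures (45 46 49 20 50 41 52 54 in the log is exactly that string, for the primary and backup headers), and the 2 bytes at 0x1fe are the protective MBR's 55 aa boot signature. A single command covers all three:

wipefs --all /dev/nvme0n1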
00:16:29.868 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:16:29.868 23:58:08 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:29.868 23:58:08 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:16:29.868 23:58:08 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:30.127 23:58:08 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.127 23:58:08 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.127 23:58:08 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.127 23:58:08 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.127 23:58:08 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.127 23:58:08 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.127 23:58:08 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.127 23:58:08 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.127 23:58:08 nvme -- scripts/common.sh@344 -- # case "$op" in 00:16:30.127 23:58:08 nvme -- scripts/common.sh@345 -- # : 1 00:16:30.127 23:58:08 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.127 23:58:08 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:30.127 23:58:08 nvme -- scripts/common.sh@365 -- # decimal 1 00:16:30.127 23:58:08 nvme -- scripts/common.sh@353 -- # local d=1 00:16:30.127 23:58:08 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.127 23:58:08 nvme -- scripts/common.sh@355 -- # echo 1 00:16:30.127 23:58:08 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.127 23:58:08 nvme -- scripts/common.sh@366 -- # decimal 2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@353 -- # local d=2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.127 23:58:08 nvme -- scripts/common.sh@355 -- # echo 2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.127 23:58:08 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.127 23:58:08 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.127 23:58:08 nvme -- scripts/common.sh@368 -- # return 0 00:16:30.127 23:58:08 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.127 23:58:08 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:30.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.127 --rc genhtml_branch_coverage=1 00:16:30.127 --rc genhtml_function_coverage=1 00:16:30.127 --rc genhtml_legend=1 00:16:30.127 --rc geninfo_all_blocks=1 00:16:30.127 --rc geninfo_unexecuted_blocks=1 00:16:30.127 00:16:30.128 ' 00:16:30.128 23:58:08 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:30.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.128 --rc genhtml_branch_coverage=1 00:16:30.128 --rc genhtml_function_coverage=1 00:16:30.128 --rc genhtml_legend=1 00:16:30.128 --rc geninfo_all_blocks=1 00:16:30.128 --rc geninfo_unexecuted_blocks=1 00:16:30.128 00:16:30.128 ' 00:16:30.128 23:58:08 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:30.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.128 --rc genhtml_branch_coverage=1 00:16:30.128 --rc 
genhtml_function_coverage=1 00:16:30.128 --rc genhtml_legend=1 00:16:30.128 --rc geninfo_all_blocks=1 00:16:30.128 --rc geninfo_unexecuted_blocks=1 00:16:30.128 00:16:30.128 ' 00:16:30.128 23:58:08 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:30.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.128 --rc genhtml_branch_coverage=1 00:16:30.128 --rc genhtml_function_coverage=1 00:16:30.128 --rc genhtml_legend=1 00:16:30.128 --rc geninfo_all_blocks=1 00:16:30.128 --rc geninfo_unexecuted_blocks=1 00:16:30.128 00:16:30.128 ' 00:16:30.128 23:58:08 nvme -- nvme/nvme.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:16:31.506 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:16:31.506 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:16:31.506 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:16:31.506 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:16:31.506 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:16:31.506 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:16:31.506 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:16:31.506 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:16:31.506 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:16:32.444 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:16:32.444 23:58:10 nvme -- nvme/nvme.sh@79 -- # uname 00:16:32.444 23:58:10 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:16:32.444 23:58:10 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:16:32.444 23:58:10 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1075 -- # stubpid=504681 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1074 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:16:32.444 Waiting for stub to ready for secondary processes... 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/504681 ]] 00:16:32.444 23:58:10 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:32.444 [2024-12-09 23:58:10.837584] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
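The long cmp_versions trace further up is just a dotted-version comparison gating the lcov coverage options. Its logic, condensed into a standalone sketch (assumes purely numeric version components):

version_lt() {
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"   # true for the 1.15 seen here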
00:16:32.444 [2024-12-09 23:58:10.837667] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:16:33.380 23:58:11 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:33.380 23:58:11 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/504681 ]] 00:16:33.380 23:58:11 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:34.317 [2024-12-09 23:58:12.575814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.317 [2024-12-09 23:58:12.627493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.317 [2024-12-09 23:58:12.627553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.317 [2024-12-09 23:58:12.627556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.317 23:58:12 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:34.317 23:58:12 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/504681 ]] 00:16:34.317 23:58:12 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:35.697 23:58:13 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:35.697 23:58:13 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/504681 ]] 00:16:35.697 23:58:13 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:36.635 23:58:14 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:36.635 23:58:14 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/504681 ]] 00:16:36.635 23:58:14 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:37.205 [2024-12-09 23:58:15.625612] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:16:37.205 [2024-12-09 23:58:15.625652] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:16:37.205 [2024-12-09 23:58:15.635312] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:16:37.205 [2024-12-09 23:58:15.635376] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:16:37.466 23:58:15 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:37.466 23:58:15 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:16:37.466 done. 
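The "Waiting for stub" loop that just finished follows a simple contract: the stub application touches /var/run/spdk_stub0 once its hugepage and EAL setup is done, and the harness polls for that file while checking that the stub process is still alive. In outline (a sketch of the harness logic, not its verbatim source):

/var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
while [ ! -e /var/run/spdk_stub0 ]; do
    # give up if the stub died before creating its ready file
    [[ -e /proc/$stubpid ]] || { echo "stub failed to start" >&2; exit 1; }
    sleep 1s
done
echo done.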
00:16:37.466 23:58:15 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:37.466 23:58:15 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:16:37.466 23:58:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.466 23:58:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.466 ************************************ 00:16:37.466 START TEST nvme_reset 00:16:37.466 ************************************ 00:16:37.466 23:58:15 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:37.727 [2024-12-09 23:58:16.189927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190249] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190386] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.727 [2024-12-09 23:58:16.190445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting 
outstanding command
[... identical "aborting outstanding command" messages repeated for every command in flight during the first reset ...]
00:16:43.007 [2024-12-09 23:58:21.203710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
[... the same per-command abort messages repeated for the second reset ...]
00:16:48.289 [2024-12-09 23:58:26.216731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
[... the same per-command abort messages repeated for the third reset ...]
nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.217986] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.218005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.218024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.218058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.289 [2024-12-09 23:58:26.218077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.576 Initializing NVMe Controllers 00:16:53.576 Associating INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) with lcore 0 00:16:53.576 Initialization complete. Launching workers. 
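The two bursts above appear to be one message per outstanding tracker failed on the qpair while the controller resets. A minimal triage sketch, assuming this console output was saved to a file named console.log (hypothetical path):

# Count the abort messages, then bucket them by wall-clock second to see
# one burst per reset iteration.
grep -c 'nvme_pcie_qpair_abort_trackers' console.log
grep -o '\[2024-12-09 23:58:[0-9]\{2\}' console.log | sort | uniq -c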
00:16:53.576 Initializing NVMe Controllers
00:16:53.576 Associating INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) with lcore 0
00:16:53.576 Initialization complete. Launching workers.
00:16:53.576 Starting thread on core 0
00:16:53.576 ========================================================
00:16:53.576 754240 IO completed successfully
00:16:53.576 64 IO completed with error
00:16:53.576 --------------------------------------------------------
00:16:53.576 754304 IO completed total
00:16:53.576 754304 IO submitted
00:16:53.576 Starting thread on core 0
00:16:53.576 ========================================================
00:16:53.576 756928 IO completed successfully
00:16:53.576 64 IO completed with error
00:16:53.576 --------------------------------------------------------
00:16:53.576 756992 IO completed total
00:16:53.576 756992 IO submitted
00:16:53.576 Starting thread on core 0
00:16:53.576 ========================================================
00:16:53.576 755264 IO completed successfully
00:16:53.576 64 IO completed with error
00:16:53.576 --------------------------------------------------------
00:16:53.576 755328 IO completed total
00:16:53.576 755328 IO submitted
00:16:53.576 
00:16:53.576 real    0m15.387s
00:16:53.576 user    0m15.080s
00:16:53.576 sys     0m0.195s
00:16:53.576 23:58:31 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:53.576 23:58:31 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:16:53.576 ************************************
00:16:53.576 END TEST nvme_reset
00:16:53.576 ************************************
00:16:53.576 23:58:31 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:16:53.576 23:58:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:53.576 23:58:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:53.576 23:58:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:53.576 ************************************
00:16:53.576 START TEST nvme_identify
00:16:53.576 ************************************
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:16:53.576 23:58:31 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:16:53.576 23:58:31 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:16:53.576 23:58:31 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:16:53.576 23:58:31 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:16:53.576 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:84:00.0
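The BDF discovery traced above is just gen_nvme.sh piped through jq, so it can be reproduced standalone from the same workspace (paths taken from the trace; the discovered address will vary per machine):

# Re-run the enumeration the harness just performed.
rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
"$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
# Expected on this node: 0000:84:00.0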
00:16:53.577 23:58:31 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -i 0
00:16:53.577 =====================================================
00:16:53.577 NVMe Controller at 0000:84:00.0 [8086:0a54]
00:16:53.577 =====================================================
00:16:53.577 Controller Capabilities/Features
00:16:53.577 ================================
00:16:53.577 Vendor ID: 8086
00:16:53.577 Subsystem Vendor ID: 8086
00:16:53.577 Serial Number: BTLJ724400Z71P0FGN
00:16:53.577 Model Number: INTEL SSDPE2KX010T8
00:16:53.577 Firmware Version: VDV10184
00:16:53.577 Recommended Arb Burst: 0
00:16:53.577 IEEE OUI Identifier: e4 d2 5c
00:16:53.577 Multi-path I/O
00:16:53.577 May have multiple subsystem ports: No
00:16:53.577 May have multiple controllers: No
00:16:53.577 Associated with SR-IOV VF: No
00:16:53.577 Max Data Transfer Size: 131072
00:16:53.577 Max Number of Namespaces: 128
00:16:53.577 Max Number of I/O Queues: 128
00:16:53.577 NVMe Specification Version (VS): 1.2
00:16:53.577 NVMe Specification Version (Identify): 1.2
00:16:53.577 Maximum Queue Entries: 4096
00:16:53.577 Contiguous Queues Required: Yes
00:16:53.577 Arbitration Mechanisms Supported
00:16:53.577 Weighted Round Robin: Supported
00:16:53.577 Vendor Specific: Not Supported
00:16:53.577 Reset Timeout: 60000 ms
00:16:53.577 Doorbell Stride: 4 bytes
00:16:53.577 NVM Subsystem Reset: Not Supported
00:16:53.577 Command Sets Supported
00:16:53.577 NVM Command Set: Supported
00:16:53.577 Boot Partition: Not Supported
00:16:53.577 Memory Page Size Minimum: 4096 bytes
00:16:53.577 Memory Page Size Maximum: 4096 bytes
00:16:53.577 Persistent Memory Region: Not Supported
00:16:53.577 Optional Asynchronous Events Supported
00:16:53.577 Namespace Attribute Notices: Not Supported
00:16:53.577 Firmware Activation Notices: Supported
00:16:53.577 ANA Change Notices: Not Supported
00:16:53.577 PLE Aggregate Log Change Notices: Not Supported
00:16:53.577 LBA Status Info Alert Notices: Not Supported
00:16:53.577 EGE Aggregate Log Change Notices: Not Supported
00:16:53.577 Normal NVM Subsystem Shutdown event: Not Supported
00:16:53.577 Zone Descriptor Change Notices: Not Supported
00:16:53.577 Discovery Log Change Notices: Not Supported
00:16:53.577 Controller Attributes
00:16:53.577 128-bit Host Identifier: Not Supported
00:16:53.577 Non-Operational Permissive Mode: Not Supported
00:16:53.577 NVM Sets: Not Supported
00:16:53.577 Read Recovery Levels: Not Supported
00:16:53.577 Endurance Groups: Not Supported
00:16:53.577 Predictable Latency Mode: Not Supported
00:16:53.577 Traffic Based Keep ALive: Not Supported
00:16:53.577 Namespace Granularity: Not Supported
00:16:53.577 SQ Associations: Not Supported
00:16:53.577 UUID List: Not Supported
00:16:53.577 Multi-Domain Subsystem: Not Supported
00:16:53.577 Fixed Capacity Management: Not Supported
00:16:53.577 Variable Capacity Management: Not Supported
00:16:53.577 Delete Endurance Group: Not Supported
00:16:53.577 Delete NVM Set: Not Supported
00:16:53.577 Extended LBA Formats Supported: Not Supported
00:16:53.577 Flexible Data Placement Supported: Not Supported
00:16:53.577 
00:16:53.577 Controller Memory Buffer Support
00:16:53.577 ================================
00:16:53.577 Supported: No
00:16:53.577 
00:16:53.577 Persistent Memory Region Support
00:16:53.577 ================================
00:16:53.577 Supported: No
00:16:53.577 
00:16:53.577 Admin Command Set Attributes
00:16:53.577 ============================
00:16:53.577 Security Send/Receive: Not Supported
00:16:53.577 Format NVM: Supported
00:16:53.577 Firmware Activate/Download: Supported
00:16:53.577 Namespace Management: Supported
00:16:53.577 Device Self-Test: Not Supported
00:16:53.577 Directives: Not Supported
00:16:53.577 NVMe-MI: Not Supported
00:16:53.577 Virtualization Management: Not Supported
00:16:53.577 Doorbell Buffer Config: Not Supported
00:16:53.577 Get LBA Status Capability: Not Supported
00:16:53.577 Command & Feature Lockdown Capability: Not Supported
00:16:53.577 Abort Command Limit: 4
00:16:53.577 Async Event Request Limit: 4
00:16:53.577 Number of Firmware Slots: 2
00:16:53.577 Firmware Slot 1 Read-Only: No
00:16:53.577 Firmware Activation Without Reset: Yes
00:16:53.577 Multiple Update Detection Support: No
00:16:53.577 Firmware Update Granularity: No Information Provided
00:16:53.577 Per-Namespace SMART Log: No
00:16:53.577 Asymmetric Namespace Access Log Page: Not Supported
00:16:53.577 Subsystem NQN: 
00:16:53.577 Command Effects Log Page: Supported
00:16:53.577 Get Log Page Extended Data: Supported
00:16:53.577 Telemetry Log Pages: Supported
00:16:53.577 Persistent Event Log Pages: Not Supported
00:16:53.577 Supported Log Pages Log Page: May Support
00:16:53.577 Commands Supported & Effects Log Page: Not Supported
00:16:53.577 Feature Identifiers & Effects Log Page:May Support
00:16:53.577 NVMe-MI Commands & Effects Log Page: May Support
00:16:53.577 Data Area 4 for Telemetry Log: Not Supported
00:16:53.577 Error Log Page Entries Supported: 64
00:16:53.577 Keep Alive: Not Supported
00:16:53.577 
00:16:53.577 NVM Command Set Attributes
00:16:53.577 ==========================
00:16:53.577 Submission Queue Entry Size
00:16:53.577 Max: 64
00:16:53.577 Min: 64
00:16:53.577 Completion Queue Entry Size
00:16:53.577 Max: 16
00:16:53.577 Min: 16
00:16:53.577 Number of Namespaces: 128
00:16:53.577 Compare Command: Not Supported
00:16:53.577 Write Uncorrectable Command: Supported
00:16:53.577 Dataset Management Command: Supported
00:16:53.577 Write Zeroes Command: Not Supported
00:16:53.577 Set Features Save Field: Not Supported
00:16:53.577 Reservations: Not Supported
00:16:53.577 Timestamp: Not Supported
00:16:53.577 Copy: Not Supported
00:16:53.577 Volatile Write Cache: Not Present
00:16:53.577 Atomic Write Unit (Normal): 1
00:16:53.577 Atomic Write Unit (PFail): 1
00:16:53.577 Atomic Compare & Write Unit: 1
00:16:53.577 Fused Compare & Write: Not Supported
00:16:53.577 Scatter-Gather List
00:16:53.577 SGL Command Set: Not Supported
00:16:53.577 SGL Keyed: Not Supported
00:16:53.577 SGL Bit Bucket Descriptor: Not Supported
00:16:53.577 SGL Metadata Pointer: Not Supported
00:16:53.577 Oversized SGL: Not Supported
00:16:53.577 SGL Metadata Address: Not Supported
00:16:53.577 SGL Offset: Not Supported
00:16:53.577 Transport SGL Data Block: Not Supported
00:16:53.577 Replay Protected Memory Block: Not Supported
00:16:53.577 
00:16:53.577 Firmware Slot Information
00:16:53.577 =========================
00:16:53.577 Active slot: 1
00:16:53.577 Slot 1 Firmware Revision: VDV10184
00:16:53.577 
00:16:53.577 
00:16:53.577 Commands Supported and Effects
00:16:53.577 ==============================
00:16:53.577 Admin Commands
00:16:53.577 --------------
00:16:53.577 Delete I/O Submission Queue (00h): Supported
00:16:53.577 Create I/O Submission Queue (01h): Supported All-NS-Exclusive
00:16:53.577 Get Log Page (02h): Supported
00:16:53.577 Delete I/O Completion Queue (04h): Supported
00:16:53.577 Create I/O Completion Queue (05h): Supported All-NS-Exclusive
00:16:53.577 Identify (06h): Supported
00:16:53.577 Abort (08h): Supported
00:16:53.577 Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change
00:16:53.577 Get Features (0Ah): Supported
00:16:53.577 Asynchronous Event Request (0Ch): Supported
00:16:53.577 Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive
00:16:53.577 Firmware Commit (10h): Supported Ctrlr-Cap-Change
00:16:53.577 Firmware Image Download (11h): Supported
00:16:53.577 Namespace Attachment (15h): Supported Per-NS-Exclusive
00:16:53.577 Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive
00:16:53.577 Vendor specific (C8h): Supported
00:16:53.577 Vendor specific (D2h): Supported
00:16:53.577 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:16:53.577 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:16:53.577 I/O Commands
00:16:53.577 ------------
00:16:53.577 Flush (00h): Supported LBA-Change
00:16:53.577 Write (01h): Supported LBA-Change
00:16:53.577 Read (02h): Supported
00:16:53.577 Write Uncorrectable (04h): Supported LBA-Change
00:16:53.577 Dataset Management (09h): Supported LBA-Change
00:16:53.577 
00:16:53.577 Error Log
00:16:53.577 =========
00:16:53.577 Entry: 0
00:16:53.577 Error Count: 0xf0d
00:16:53.577 Submission Queue Id: 0x2
00:16:53.577 Command Id: 0xffff
00:16:53.577 Phase Bit: 0
00:16:53.577 Status Code: 0x6
00:16:53.577 Status Code Type: 0x0
00:16:53.577 Do Not Retry: 1
00:16:53.577 Error Location: 0xffff
00:16:53.577 LBA: 0x0
00:16:53.577 Namespace: 0xffffffff
00:16:53.577 Vendor Log Page: 0x0
00:16:53.577 -----------
00:16:53.577 [... Entries 1 through 63 carry the same field values as Entry 0, except that Error Count decrements from 0xf0c down to 0xece and Submission Queue Id continues the 0x2, 0x2, 0x0 cycle ...]
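The error counts are hexadecimal, so the newest entry (0xf0d) is the 3853rd lifetime error. A hedged cross-check sketch for a live system; the device path is a placeholder for whichever character device maps to PCIe 0000:84:00.0:

# Convert the newest entry's Error Count to decimal:
printf '%d\n' 0xf0d    # 3853
# Read the same 64-entry error log page via nvme-cli (assumed installed):
sudo nvme error-log /dev/nvme0 --log-entries=64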
00:16:53.581 
00:16:53.581 Arbitration
00:16:53.581 ===========
00:16:53.581 Arbitration Burst: 1
00:16:53.581 Low Priority Weight: 1
00:16:53.581 Medium Priority Weight: 1
00:16:53.581 High Priority Weight: 1
00:16:53.581 
00:16:53.581 Power Management
00:16:53.581 ================
00:16:53.581 Number of Power States: 1
00:16:53.581 Current Power State: Power State #0
00:16:53.581 Power State #0:
00:16:53.581 Max Power: 12.00 W
00:16:53.581 Non-Operational State: Operational
00:16:53.581 Entry Latency: Not Reported
00:16:53.581 Exit Latency: Not Reported
00:16:53.581 Relative Read Throughput: 0
00:16:53.581 Relative Read Latency: 0
00:16:53.581 Relative Write Throughput: 0
00:16:53.581 Relative Write Latency: 0
00:16:53.581 Idle Power: Not Reported
00:16:53.581 Active Power: Not Reported
00:16:53.581 Non-Operational Permissive Mode: Not Supported
00:16:53.581 
00:16:53.581 Health Information
00:16:53.581 ==================
00:16:53.581 Critical Warnings:
00:16:53.581 Available Spare Space: OK
00:16:53.581 Temperature: OK
00:16:53.581 Device Reliability: OK
00:16:53.581 Read Only: No
00:16:53.581 Volatile Memory Backup: OK
00:16:53.581 Current Temperature: 312 Kelvin (39 Celsius)
00:16:53.581 Temperature Threshold: 343 Kelvin (70 Celsius)
00:16:53.581 Available Spare: 100%
00:16:53.581 Available Spare Threshold: 10%
00:16:53.581 Life Percentage Used: 19%
00:16:53.581 Data Units Read: 350143343
00:16:53.581 Data Units Written: 512063364
00:16:53.581 Host Read Commands: 15702333551
00:16:53.581 Host Write Commands: 20945629911
00:16:53.581 Controller Busy Time: 3191 minutes
00:16:53.581 Power Cycles: 859
00:16:53.581 Power On Hours: 40904 hours
00:16:53.581 Unsafe Shutdowns: 736
00:16:53.581 Unrecoverable Media Errors: 0
00:16:53.581 Lifetime Error Log Entries: 3855
00:16:53.581 Warning Temperature Time: 377 minutes
00:16:53.581 Critical Temperature Time: 0 minutes
00:16:53.581 
00:16:53.581 Number of Queues
00:16:53.581 ================
00:16:53.581 Number of I/O Submission Queues: 128
00:16:53.581 Number of I/O Completion Queues: 128
00:16:53.581 
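The health log reports temperatures in Kelvin with the Celsius value derived by subtracting 273; the figures above check out, and the spare pool has plenty of headroom. A quick shell sanity check of those readings:

# 312 K current and 343 K threshold, as reported above:
echo "current:   $((312 - 273)) C"    # 39, matches "(39 Celsius)"
echo "threshold: $((343 - 273)) C"    # 70, matches "(70 Celsius)"
# Spare headroom: 100% available against a 10% alarm threshold.
echo "spare margin: $((100 - 10)) points"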
00:16:53.581 Intel Health Information
00:16:53.581 ==================
00:16:53.581 Program Fail Count:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 0
00:16:53.581 Erase Fail Count:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 0
00:16:53.581 Wear Leveling Count:
00:16:53.581 Normalized Value : 81
00:16:53.581 Current Raw Value:
00:16:53.581 Min: 579
00:16:53.581 Max: 994
00:16:53.581 Avg: 950
00:16:53.581 End to End Error Detection Count:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 0
00:16:53.581 CRC Error Count:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 0
00:16:53.581 Timed Workload, Media Wear:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 65535
00:16:53.581 Timed Workload, Host Read/Write Ratio:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 65535%
00:16:53.581 Timed Workload, Timer:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 65535
00:16:53.581 Thermal Throttle Status:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value:
00:16:53.581 Percentage: 0%
00:16:53.581 Throttling Event Count: 0
00:16:53.581 Retry Buffer Overflow Counter:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 0
00:16:53.581 PLL Lock Loss Count:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 0
00:16:53.581 NAND Bytes Written:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 20761360
00:16:53.581 Host Bytes Written:
00:16:53.581 Normalized Value : 100
00:16:53.581 Current Raw Value: 7813466
00:16:53.581 
00:16:53.581 Intel Temperature Information
00:16:53.581 ==================
00:16:53.581 Current Temperature: 39
00:16:53.581 Overtemp shutdown Flag for last critical component temperature: 0
00:16:53.581 Overtemp shutdown Flag for life critical component temperature: 0
00:16:53.581 Highest temperature: 43
00:16:53.581 Lowest temperature: 19
00:16:53.581 Specified Maximum Operating Temperature: 70
00:16:53.581 Specified Minimum Operating Temperature: 0
00:16:53.581 Estimated offset: 0
00:16:53.581 
00:16:53.581 
00:16:53.581 Intel Marketing Information
00:16:53.581 ==================
00:16:53.581 Marketing Product Information: Intel(R) SSD DC P4510 Series
00:16:53.581 
00:16:53.581 
00:16:53.581 Active Namespaces
00:16:53.581 =================
00:16:53.581 Namespace ID:1
00:16:53.581 Error Recovery Timeout: Unlimited
00:16:53.581 Command Set Identifier: NVM (00h)
00:16:53.581 Deallocate: Supported
00:16:53.581 Deallocated/Unwritten Error: Not Supported
00:16:53.581 Deallocated Read Value: All 0x00
00:16:53.581 Deallocate in Write Zeroes: Not Supported
00:16:53.582 Deallocated Guard Field: 0xFFFF
00:16:53.582 Flush: Not Supported
00:16:53.582 Reservation: Not Supported
00:16:53.582 Namespace Sharing Capabilities: Private
00:16:53.582 Size (in LBAs): 1953525168 (931GiB)
00:16:53.582 Capacity (in LBAs): 1953525168 (931GiB)
00:16:53.582 Utilization (in LBAs): 1953525168 (931GiB)
00:16:53.582 NGUID: 01000000492A00005CD2E467BED34D51
00:16:53.582 EUI64: 5CD2E467BED35239
00:16:53.582 Thin Provisioning: Not Supported
00:16:53.582 Per-NS Atomic Units: No
00:16:53.582 NGUID/EUI64 Never Reused: No
00:16:53.582 Namespace Write Protected: No
00:16:53.582 Number of LBA Formats: 2
00:16:53.582 Current LBA Format: LBA Format #00
00:16:53.582 LBA Format #00: Data Size: 512 Metadata Size: 0
00:16:53.582 LBA Format #01: Data Size: 4096 Metadata Size: 0
00:16:53.582 
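The 931GiB figure follows directly from the LBA count and the active 512-byte LBA format; a one-liner check:

# Size (in LBAs) times the data size of the current LBA Format #00:
bytes=$((1953525168 * 512))
echo "$bytes bytes"                          # 1000204886016
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # 931, matches "(931GiB)"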
00:16:53.582 23:58:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:16:53.582 23:58:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0
00:16:53.582 =====================================================
00:16:53.582 NVMe Controller at 0000:84:00.0 [8086:0a54]
00:16:53.582 =====================================================
00:16:53.582 [... Controller Capabilities/Features through Commands Supported and Effects repeat the first spdk_nvme_identify dump above verbatim ...]
00:16:53.582 
00:16:53.582 Error Log
00:16:53.582 =========
00:16:53.582 Entry: 0
00:16:53.583 Error Count: 0xf0d
00:16:53.583 Submission Queue Id: 0x2
00:16:53.583 Command Id: 0xffff
00:16:53.583 Phase Bit: 0
00:16:53.583 Status Code: 0x6
00:16:53.583 Status Code Type: 0x0
00:16:53.583 Do Not Retry: 1
00:16:53.583 Error Location: 0xffff
00:16:53.583 LBA: 0x0
00:16:53.583 Namespace: 0xffffffff
00:16:53.583 Vendor Log Page: 0x0
00:16:53.583 -----------
00:16:53.583 [... Entries 1 through 4 repeat the pattern above (Error Count 0xf0c down to 0xf09) ...]
00:16:53.583 Entry: 5
00:16:53.583 Error Count: 0xf08
00:16:53.583 Submission Queue Id: 0x0
00:16:53.583 Command Id: 0xffff
00:16:53.583 Phase Bit: 0
00:16:53.583 Status Code:
0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 6 00:16:53.583 Error Count: 0xf07 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 7 00:16:53.583 Error Count: 0xf06 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 8 00:16:53.583 Error Count: 0xf05 00:16:53.583 Submission Queue Id: 0x0 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 9 00:16:53.583 Error Count: 0xf04 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 10 00:16:53.583 Error Count: 0xf03 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 11 00:16:53.583 Error Count: 0xf02 00:16:53.583 Submission Queue Id: 0x0 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 12 00:16:53.583 Error Count: 0xf01 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 13 00:16:53.583 Error Count: 0xf00 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 14 00:16:53.583 Error Count: 0xeff 00:16:53.583 Submission Queue Id: 0x0 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 
00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 15 00:16:53.583 Error Count: 0xefe 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.583 Status Code Type: 0x0 00:16:53.583 Do Not Retry: 1 00:16:53.583 Error Location: 0xffff 00:16:53.583 LBA: 0x0 00:16:53.583 Namespace: 0xffffffff 00:16:53.583 Vendor Log Page: 0x0 00:16:53.583 ----------- 00:16:53.583 Entry: 16 00:16:53.583 Error Count: 0xefd 00:16:53.583 Submission Queue Id: 0x2 00:16:53.583 Command Id: 0xffff 00:16:53.583 Phase Bit: 0 00:16:53.583 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 17 00:16:53.584 Error Count: 0xefc 00:16:53.584 Submission Queue Id: 0x0 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 18 00:16:53.584 Error Count: 0xefb 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 19 00:16:53.584 Error Count: 0xefa 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 20 00:16:53.584 Error Count: 0xef9 00:16:53.584 Submission Queue Id: 0x0 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 21 00:16:53.584 Error Count: 0xef8 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 22 00:16:53.584 Error Count: 0xef7 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 23 00:16:53.584 Error Count: 0xef6 00:16:53.584 Submission Queue Id: 0x0 00:16:53.584 Command Id: 0xffff 
00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 24 00:16:53.584 Error Count: 0xef5 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 25 00:16:53.584 Error Count: 0xef4 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 26 00:16:53.584 Error Count: 0xef3 00:16:53.584 Submission Queue Id: 0x0 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 27 00:16:53.584 Error Count: 0xef2 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 28 00:16:53.584 Error Count: 0xef1 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 29 00:16:53.584 Error Count: 0xef0 00:16:53.584 Submission Queue Id: 0x0 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 30 00:16:53.584 Error Count: 0xeef 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 31 00:16:53.584 Error Count: 0xeee 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 32 00:16:53.584 Error Count: 0xeed 00:16:53.584 Submission Queue Id: 0x0 00:16:53.584 
Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 33 00:16:53.584 Error Count: 0xeec 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 34 00:16:53.584 Error Count: 0xeeb 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 35 00:16:53.584 Error Count: 0xeea 00:16:53.584 Submission Queue Id: 0x0 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.584 Error Location: 0xffff 00:16:53.584 LBA: 0x0 00:16:53.584 Namespace: 0xffffffff 00:16:53.584 Vendor Log Page: 0x0 00:16:53.584 ----------- 00:16:53.584 Entry: 36 00:16:53.584 Error Count: 0xee9 00:16:53.584 Submission Queue Id: 0x2 00:16:53.584 Command Id: 0xffff 00:16:53.584 Phase Bit: 0 00:16:53.584 Status Code: 0x6 00:16:53.584 Status Code Type: 0x0 00:16:53.584 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 37 00:16:53.585 Error Count: 0xee8 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 38 00:16:53.585 Error Count: 0xee7 00:16:53.585 Submission Queue Id: 0x0 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 39 00:16:53.585 Error Count: 0xee6 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 40 00:16:53.585 Error Count: 0xee5 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 41 00:16:53.585 Error Count: 0xee4 00:16:53.585 Submission Queue 
Id: 0x0 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 42 00:16:53.585 Error Count: 0xee3 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 43 00:16:53.585 Error Count: 0xee2 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 44 00:16:53.585 Error Count: 0xee1 00:16:53.585 Submission Queue Id: 0x0 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 45 00:16:53.585 Error Count: 0xee0 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 46 00:16:53.585 Error Count: 0xedf 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 47 00:16:53.585 Error Count: 0xede 00:16:53.585 Submission Queue Id: 0x0 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 48 00:16:53.585 Error Count: 0xedd 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 49 00:16:53.585 Error Count: 0xedc 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 50 00:16:53.585 Error Count: 0xedb 
00:16:53.585 Submission Queue Id: 0x0 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 51 00:16:53.585 Error Count: 0xeda 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 52 00:16:53.585 Error Count: 0xed9 00:16:53.585 Submission Queue Id: 0x2 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.585 Namespace: 0xffffffff 00:16:53.585 Vendor Log Page: 0x0 00:16:53.585 ----------- 00:16:53.585 Entry: 53 00:16:53.585 Error Count: 0xed8 00:16:53.585 Submission Queue Id: 0x0 00:16:53.585 Command Id: 0xffff 00:16:53.585 Phase Bit: 0 00:16:53.585 Status Code: 0x6 00:16:53.585 Status Code Type: 0x0 00:16:53.585 Do Not Retry: 1 00:16:53.585 Error Location: 0xffff 00:16:53.585 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 54 00:16:53.586 Error Count: 0xed7 00:16:53.586 Submission Queue Id: 0x2 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 55 00:16:53.586 Error Count: 0xed6 00:16:53.586 Submission Queue Id: 0x2 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 56 00:16:53.586 Error Count: 0xed5 00:16:53.586 Submission Queue Id: 0x0 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 57 00:16:53.586 Error Count: 0xed4 00:16:53.586 Submission Queue Id: 0x2 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 58 00:16:53.586 Error Count: 0xed3 00:16:53.586 Submission Queue Id: 0x2 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 59 00:16:53.586 
Error Count: 0xed2 00:16:53.586 Submission Queue Id: 0x0 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 60 00:16:53.586 Error Count: 0xed1 00:16:53.586 Submission Queue Id: 0x2 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 61 00:16:53.586 Error Count: 0xed0 00:16:53.586 Submission Queue Id: 0x2 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 62 00:16:53.586 Error Count: 0xecf 00:16:53.586 Submission Queue Id: 0x0 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 ----------- 00:16:53.586 Entry: 63 00:16:53.586 Error Count: 0xece 00:16:53.586 Submission Queue Id: 0x2 00:16:53.586 Command Id: 0xffff 00:16:53.586 Phase Bit: 0 00:16:53.586 Status Code: 0x6 00:16:53.586 Status Code Type: 0x0 00:16:53.586 Do Not Retry: 1 00:16:53.586 Error Location: 0xffff 00:16:53.586 LBA: 0x0 00:16:53.586 Namespace: 0xffffffff 00:16:53.586 Vendor Log Page: 0x0 00:16:53.586 00:16:53.586 Arbitration 00:16:53.586 =========== 00:16:53.586 Arbitration Burst: 1 00:16:53.586 Low Priority Weight: 1 00:16:53.586 Medium Priority Weight: 1 00:16:53.586 High Priority Weight: 1 00:16:53.586 00:16:53.586 Power Management 00:16:53.586 ================ 00:16:53.586 Number of Power States: 1 00:16:53.586 Current Power State: Power State #0 00:16:53.586 Power State #0: 00:16:53.586 Max Power: 12.00 W 00:16:53.586 Non-Operational State: Operational 00:16:53.586 Entry Latency: Not Reported 00:16:53.586 Exit Latency: Not Reported 00:16:53.586 Relative Read Throughput: 0 00:16:53.586 Relative Read Latency: 0 00:16:53.586 Relative Write Throughput: 0 00:16:53.586 Relative Write Latency: 0 00:16:53.586 Idle Power: Not Reported 00:16:53.586 Active Power: Not Reported 00:16:53.586 Non-Operational Permissive Mode: Not Supported 00:16:53.586 00:16:53.586 Health Information 00:16:53.586 ================== 00:16:53.586 Critical Warnings: 00:16:53.586 Available Spare Space: OK 00:16:53.586 Temperature: OK 00:16:53.586 Device Reliability: OK 00:16:53.586 Read Only: No 00:16:53.586 Volatile Memory Backup: OK 00:16:53.586 Current Temperature: 312 Kelvin (39 Celsius) 00:16:53.586 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:53.586 Available Spare: 100% 00:16:53.586 Available Spare Threshold: 10% 00:16:53.586 Life Percentage Used: 19% 00:16:53.586 Data Units Read: 350143343 00:16:53.586 Data Units Written: 512063364 00:16:53.586 Host Read Commands: 15702333551 00:16:53.586 Host Write Commands: 20945629911 00:16:53.586 Controller Busy Time: 3191 minutes 00:16:53.586 Power Cycles: 859 
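All 64 error-log entries above are uniform: Status Code 0x6 under Status Code Type 0x0 (a generic internal error), Command Id 0xffff, Do Not Retry set, and no LBA or namespace attribution, so they read as stale drive-resident counters rather than failures produced by this run. A quick way to confirm that uniformity from a captured copy of the dump; a sketch, assuming the output was saved one field per line to identify.log (the filename is an assumption):

# Expect 64 listed entries, matching 'Error Log Page Entries Supported: 64'.
grep -c 'Error Count:' identify.log
# Tally the status codes; a single line '0x6 64' confirms they are all alike.
awk '/Status Code:/ { codes[$NF]++ } END { for (c in codes) print c, codes[c] }' identify.log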
00:16:53.586 Power On Hours: 40904 hours 00:16:53.586 Unsafe Shutdowns: 736 00:16:53.586 Unrecoverable Media Errors: 0 00:16:53.586 Lifetime Error Log Entries: 3855 00:16:53.586 Warning Temperature Time: 377 minutes 00:16:53.586 Critical Temperature Time: 0 minutes 00:16:53.586 00:16:53.586 Number of Queues 00:16:53.586 ================ 00:16:53.586 Number of I/O Submission Queues: 128 00:16:53.586 Number of I/O Completion Queues: 128 00:16:53.586 00:16:53.586 Intel Health Information 00:16:53.586 ================== 00:16:53.586 Program Fail Count: 00:16:53.586 Normalized Value : 100 00:16:53.586 Current Raw Value: 0 00:16:53.586 Erase Fail Count: 00:16:53.586 Normalized Value : 100 00:16:53.586 Current Raw Value: 0 00:16:53.586 Wear Leveling Count: 00:16:53.586 Normalized Value : 81 00:16:53.586 Current Raw Value: 00:16:53.586 Min: 579 00:16:53.586 Max: 994 00:16:53.586 Avg: 950 00:16:53.586 End to End Error Detection Count: 00:16:53.586 Normalized Value : 100 00:16:53.586 Current Raw Value: 0 00:16:53.586 CRC Error Count: 00:16:53.586 Normalized Value : 100 00:16:53.586 Current Raw Value: 0 00:16:53.586 Timed Workload, Media Wear: 00:16:53.586 Normalized Value : 100 00:16:53.586 Current Raw Value: 65535 00:16:53.586 Timed Workload, Host Read/Write Ratio: 00:16:53.586 Normalized Value : 100 00:16:53.586 Current Raw Value: 65535% 00:16:53.586 Timed Workload, Timer: 00:16:53.586 Normalized Value : 100 00:16:53.587 Current Raw Value: 65535 00:16:53.587 Thermal Throttle Status: 00:16:53.587 Normalized Value : 100 00:16:53.587 Current Raw Value: 00:16:53.587 Percentage: 0% 00:16:53.587 Throttling Event Count: 0 00:16:53.587 Retry Buffer Overflow Counter: 00:16:53.587 Normalized Value : 100 00:16:53.587 Current Raw Value: 0 00:16:53.587 PLL Lock Loss Count: 00:16:53.587 Normalized Value : 100 00:16:53.587 Current Raw Value: 0 00:16:53.587 NAND Bytes Written: 00:16:53.587 Normalized Value : 100 00:16:53.587 Current Raw Value: 20761360 00:16:53.587 Host Bytes Written: 00:16:53.587 Normalized Value : 100 00:16:53.587 Current Raw Value: 7813466 00:16:53.587 00:16:53.587 Intel Temperature Information 00:16:53.587 ================== 00:16:53.587 Current Temperature: 39 00:16:53.587 Overtemp shutdown Flag for last critical component temperature: 0 00:16:53.587 Overtemp shutdown Flag for life critical component temperature: 0 00:16:53.587 Highest temperature: 43 00:16:53.587 Lowest temperature: 19 00:16:53.587 Specified Maximum Operating Temperature: 70 00:16:53.587 Specified Minimum Operating Temperature: 0 00:16:53.587 Estimated offset: 0 00:16:53.587 00:16:53.587 00:16:53.587 Intel Marketing Information 00:16:53.587 ================== 00:16:53.587 Marketing Product Information: Intel(R) SSD DC P4510 Series 00:16:53.587 00:16:53.587 00:16:53.587 Active Namespaces 00:16:53.587 ================= 00:16:53.587 Namespace ID:1 00:16:53.587 Error Recovery Timeout: Unlimited 00:16:53.587 Command Set Identifier: NVM (00h) 00:16:53.587 Deallocate: Supported 00:16:53.587 Deallocated/Unwritten Error: Not Supported 00:16:53.587 Deallocated Read Value: All 0x00 00:16:53.587 Deallocate in Write Zeroes: Not Supported 00:16:53.587 Deallocated Guard Field: 0xFFFF 00:16:53.587 Flush: Not Supported 00:16:53.587 Reservation: Not Supported 00:16:53.587 Namespace Sharing Capabilities: Private 00:16:53.587 Size (in LBAs): 1953525168 (931GiB) 00:16:53.587 Capacity (in LBAs): 1953525168 (931GiB) 00:16:53.587 Utilization (in LBAs): 1953525168 (931GiB) 00:16:53.587 NGUID: 01000000492A00005CD2E467BED34D51 00:16:53.587 EUI64: 
5CD2E467BED35239 00:16:53.587 Thin Provisioning: Not Supported 00:16:53.587 Per-NS Atomic Units: No 00:16:53.587 NGUID/EUI64 Never Reused: No 00:16:53.587 Namespace Write Protected: No 00:16:53.587 Number of LBA Formats: 2 00:16:53.587 Current LBA Format: LBA Format #00 00:16:53.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:53.587 LBA Format #01: Data Size: 4096 Metadata Size: 0 00:16:53.587 00:16:53.587 00:16:53.587 real 0m0.650s 00:16:53.587 user 0m0.217s 00:16:53.587 sys 0m0.326s 00:16:53.587 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.587 23:58:31 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:16:53.587 ************************************ 00:16:53.587 END TEST nvme_identify 00:16:53.587 ************************************ 00:16:53.587 23:58:31 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:16:53.587 23:58:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:53.587 23:58:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.587 23:58:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.587 ************************************ 00:16:53.587 START TEST nvme_perf 00:16:53.587 ************************************ 00:16:53.587 23:58:32 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:16:53.587 23:58:32 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:16:54.967 Initializing NVMe Controllers 00:16:54.967 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:16:54.967 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:16:54.967 Initialization complete. Launching workers. 00:16:54.967 ======================================================== 00:16:54.967 Latency(us) 00:16:54.967 Device Information : IOPS MiB/s Average min max 00:16:54.967 PCIE (0000:84:00.0) NSID 1 from core 0: 123707.00 1449.69 1034.01 80.32 2758.07 00:16:54.967 ======================================================== 00:16:54.967 Total : 123707.00 1449.69 1034.01 80.32 2758.07 00:16:54.967 00:16:54.967 Summary latency data for PCIE (0000:84:00.0) NSID 1 from core 0: 00:16:54.967 ================================================================================= 00:16:54.967 1.00000% : 320.095us 00:16:54.967 10.00000% : 634.121us 00:16:54.967 25.00000% : 819.200us 00:16:54.967 50.00000% : 1031.585us 00:16:54.967 75.00000% : 1250.039us 00:16:54.967 90.00000% : 1456.356us 00:16:54.967 95.00000% : 1577.719us 00:16:54.967 98.00000% : 1699.081us 00:16:54.967 99.00000% : 1771.899us 00:16:54.967 99.50000% : 1844.717us 00:16:54.967 99.90000% : 2026.761us 00:16:54.967 99.99000% : 2609.304us 00:16:54.967 99.99900% : 2767.076us 00:16:54.967 99.99990% : 2767.076us 00:16:54.967 99.99999% : 2767.076us 00:16:54.967 00:16:54.967 Latency histogram for PCIE (0000:84:00.0) NSID 1 from core 0: 00:16:54.967 ============================================================================== 00:16:54.967 Range in us Cumulative IO count 00:16:54.967 80.024 - 80.403: 0.0008% ( 1) 00:16:54.967 82.679 - 83.058: 0.0016% ( 1) 00:16:54.967 89.126 - 89.505: 0.0024% ( 1) 00:16:54.967 89.505 - 89.884: 0.0032% ( 1) 00:16:54.967 89.884 - 90.264: 0.0040% ( 1) 00:16:54.967 90.264 - 90.643: 0.0065% ( 3) 00:16:54.967 91.401 - 91.781: 0.0073% ( 1) 00:16:54.967 95.953 - 96.332: 0.0081% ( 1) 00:16:54.967 96.332 - 96.711: 0.0089% ( 1) 00:16:54.967 98.607 - 99.366: 0.0097% ( 1) 00:16:54.967 100.124 - 100.883: 0.0113% 
( 2) 00:16:54.967 100.883 - 101.641: 0.0129% ( 2) 00:16:54.967 102.400 - 103.159: 0.0137% ( 1) 00:16:54.967 106.193 - 106.951: 0.0162% ( 3) 00:16:54.967 107.710 - 108.468: 0.0178% ( 2) 00:16:54.967 108.468 - 109.227: 0.0186% ( 1) 00:16:54.967 109.227 - 109.985: 0.0194% ( 1) 00:16:54.967 111.502 - 112.261: 0.0202% ( 1) 00:16:54.967 112.261 - 113.019: 0.0218% ( 2) 00:16:54.967 114.536 - 115.295: 0.0226% ( 1) 00:16:54.967 115.295 - 116.053: 0.0251% ( 3) 00:16:54.967 116.812 - 117.570: 0.0275% ( 3) 00:16:54.967 119.087 - 119.846: 0.0283% ( 1) 00:16:54.967 120.604 - 121.363: 0.0291% ( 1) 00:16:54.967 122.121 - 122.880: 0.0307% ( 2) 00:16:54.967 122.880 - 123.639: 0.0323% ( 2) 00:16:54.967 123.639 - 124.397: 0.0348% ( 3) 00:16:54.967 125.156 - 125.914: 0.0356% ( 1) 00:16:54.967 125.914 - 126.673: 0.0372% ( 2) 00:16:54.967 126.673 - 127.431: 0.0396% ( 3) 00:16:54.967 127.431 - 128.190: 0.0404% ( 1) 00:16:54.967 128.190 - 128.948: 0.0428% ( 3) 00:16:54.967 128.948 - 129.707: 0.0453% ( 3) 00:16:54.967 129.707 - 130.465: 0.0469% ( 2) 00:16:54.967 130.465 - 131.224: 0.0485% ( 2) 00:16:54.967 131.224 - 131.982: 0.0509% ( 3) 00:16:54.967 131.982 - 132.741: 0.0525% ( 2) 00:16:54.967 132.741 - 133.499: 0.0542% ( 2) 00:16:54.967 134.258 - 135.016: 0.0574% ( 4) 00:16:54.967 135.016 - 135.775: 0.0622% ( 6) 00:16:54.967 135.775 - 136.533: 0.0639% ( 2) 00:16:54.967 137.292 - 138.050: 0.0655% ( 2) 00:16:54.967 138.050 - 138.809: 0.0671% ( 2) 00:16:54.967 138.809 - 139.567: 0.0687% ( 2) 00:16:54.968 139.567 - 140.326: 0.0711% ( 3) 00:16:54.968 140.326 - 141.084: 0.0736% ( 3) 00:16:54.968 141.084 - 141.843: 0.0744% ( 1) 00:16:54.968 141.843 - 142.601: 0.0768% ( 3) 00:16:54.968 143.360 - 144.119: 0.0792% ( 3) 00:16:54.968 144.119 - 144.877: 0.0833% ( 5) 00:16:54.968 144.877 - 145.636: 0.0865% ( 4) 00:16:54.968 145.636 - 146.394: 0.0897% ( 4) 00:16:54.968 146.394 - 147.153: 0.0905% ( 1) 00:16:54.968 147.153 - 147.911: 0.0913% ( 1) 00:16:54.968 147.911 - 148.670: 0.0930% ( 2) 00:16:54.968 148.670 - 149.428: 0.0946% ( 2) 00:16:54.968 149.428 - 150.187: 0.0962% ( 2) 00:16:54.968 150.187 - 150.945: 0.0986% ( 3) 00:16:54.968 150.945 - 151.704: 0.1010% ( 3) 00:16:54.968 151.704 - 152.462: 0.1019% ( 1) 00:16:54.968 152.462 - 153.221: 0.1027% ( 1) 00:16:54.968 153.221 - 153.979: 0.1043% ( 2) 00:16:54.968 153.979 - 154.738: 0.1051% ( 1) 00:16:54.968 154.738 - 155.496: 0.1059% ( 1) 00:16:54.968 155.496 - 156.255: 0.1075% ( 2) 00:16:54.968 156.255 - 157.013: 0.1099% ( 3) 00:16:54.968 157.013 - 157.772: 0.1116% ( 2) 00:16:54.968 157.772 - 158.530: 0.1132% ( 2) 00:16:54.968 158.530 - 159.289: 0.1140% ( 1) 00:16:54.968 159.289 - 160.047: 0.1156% ( 2) 00:16:54.968 160.047 - 160.806: 0.1188% ( 4) 00:16:54.968 160.806 - 161.564: 0.1204% ( 2) 00:16:54.968 161.564 - 162.323: 0.1221% ( 2) 00:16:54.968 162.323 - 163.081: 0.1229% ( 1) 00:16:54.968 163.840 - 164.599: 0.1253% ( 3) 00:16:54.968 164.599 - 165.357: 0.1277% ( 3) 00:16:54.968 165.357 - 166.116: 0.1310% ( 4) 00:16:54.968 166.116 - 166.874: 0.1326% ( 2) 00:16:54.968 167.633 - 168.391: 0.1350% ( 3) 00:16:54.968 168.391 - 169.150: 0.1374% ( 3) 00:16:54.968 169.150 - 169.908: 0.1382% ( 1) 00:16:54.968 169.908 - 170.667: 0.1407% ( 3) 00:16:54.968 170.667 - 171.425: 0.1423% ( 2) 00:16:54.968 171.425 - 172.184: 0.1447% ( 3) 00:16:54.968 172.942 - 173.701: 0.1463% ( 2) 00:16:54.968 173.701 - 174.459: 0.1471% ( 1) 00:16:54.968 174.459 - 175.218: 0.1487% ( 2) 00:16:54.968 175.218 - 175.976: 0.1504% ( 2) 00:16:54.968 175.976 - 176.735: 0.1536% ( 4) 00:16:54.968 176.735 - 177.493: 0.1552% 
( 2) 00:16:54.968 177.493 - 178.252: 0.1568% ( 2) 00:16:54.968 178.252 - 179.010: 0.1601% ( 4) 00:16:54.968 179.010 - 179.769: 0.1617% ( 2) 00:16:54.968 179.769 - 180.527: 0.1633% ( 2) 00:16:54.968 180.527 - 181.286: 0.1641% ( 1) 00:16:54.968 181.286 - 182.044: 0.1665% ( 3) 00:16:54.968 182.044 - 182.803: 0.1706% ( 5) 00:16:54.968 182.803 - 183.561: 0.1746% ( 5) 00:16:54.968 183.561 - 184.320: 0.1762% ( 2) 00:16:54.968 185.079 - 185.837: 0.1786% ( 3) 00:16:54.968 185.837 - 186.596: 0.1827% ( 5) 00:16:54.968 186.596 - 187.354: 0.1843% ( 2) 00:16:54.968 187.354 - 188.113: 0.1875% ( 4) 00:16:54.968 188.113 - 188.871: 0.1900% ( 3) 00:16:54.968 189.630 - 190.388: 0.1908% ( 1) 00:16:54.968 190.388 - 191.147: 0.1948% ( 5) 00:16:54.968 191.147 - 191.905: 0.1972% ( 3) 00:16:54.968 191.905 - 192.664: 0.2013% ( 5) 00:16:54.968 192.664 - 193.422: 0.2061% ( 6) 00:16:54.968 193.422 - 194.181: 0.2077% ( 2) 00:16:54.968 194.181 - 195.698: 0.2094% ( 2) 00:16:54.968 195.698 - 197.215: 0.2142% ( 6) 00:16:54.968 197.215 - 198.732: 0.2183% ( 5) 00:16:54.968 198.732 - 200.249: 0.2231% ( 6) 00:16:54.968 200.249 - 201.766: 0.2280% ( 6) 00:16:54.968 201.766 - 203.283: 0.2352% ( 9) 00:16:54.968 203.283 - 204.800: 0.2393% ( 5) 00:16:54.968 204.800 - 206.317: 0.2449% ( 7) 00:16:54.968 206.317 - 207.834: 0.2522% ( 9) 00:16:54.968 207.834 - 209.351: 0.2643% ( 15) 00:16:54.968 209.351 - 210.868: 0.2732% ( 11) 00:16:54.968 210.868 - 212.385: 0.2789% ( 7) 00:16:54.968 212.385 - 213.902: 0.2862% ( 9) 00:16:54.968 213.902 - 215.419: 0.2967% ( 13) 00:16:54.968 215.419 - 216.936: 0.3031% ( 8) 00:16:54.968 216.936 - 218.453: 0.3096% ( 8) 00:16:54.968 218.453 - 219.970: 0.3145% ( 6) 00:16:54.968 219.970 - 221.487: 0.3201% ( 7) 00:16:54.968 221.487 - 223.004: 0.3266% ( 8) 00:16:54.968 223.004 - 224.521: 0.3339% ( 9) 00:16:54.968 224.521 - 226.039: 0.3403% ( 8) 00:16:54.968 226.039 - 227.556: 0.3492% ( 11) 00:16:54.968 227.556 - 229.073: 0.3597% ( 13) 00:16:54.968 229.073 - 230.590: 0.3718% ( 15) 00:16:54.968 230.590 - 232.107: 0.3775% ( 7) 00:16:54.968 232.107 - 233.624: 0.3888% ( 14) 00:16:54.968 233.624 - 235.141: 0.3977% ( 11) 00:16:54.968 235.141 - 236.658: 0.4050% ( 9) 00:16:54.968 236.658 - 238.175: 0.4082% ( 4) 00:16:54.968 238.175 - 239.692: 0.4171% ( 11) 00:16:54.968 239.692 - 241.209: 0.4276% ( 13) 00:16:54.968 241.209 - 242.726: 0.4414% ( 17) 00:16:54.968 242.726 - 244.243: 0.4478% ( 8) 00:16:54.968 244.243 - 245.760: 0.4543% ( 8) 00:16:54.968 245.760 - 247.277: 0.4632% ( 11) 00:16:54.968 247.277 - 248.794: 0.4705% ( 9) 00:16:54.968 248.794 - 250.311: 0.4769% ( 8) 00:16:54.968 250.311 - 251.828: 0.4883% ( 14) 00:16:54.968 251.828 - 253.345: 0.4955% ( 9) 00:16:54.968 253.345 - 254.862: 0.5085% ( 16) 00:16:54.968 254.862 - 256.379: 0.5190% ( 13) 00:16:54.968 256.379 - 257.896: 0.5254% ( 8) 00:16:54.968 257.896 - 259.413: 0.5343% ( 11) 00:16:54.968 259.413 - 260.930: 0.5448% ( 13) 00:16:54.968 260.930 - 262.447: 0.5529% ( 10) 00:16:54.968 262.447 - 263.964: 0.5602% ( 9) 00:16:54.968 263.964 - 265.481: 0.5659% ( 7) 00:16:54.968 265.481 - 266.999: 0.5723% ( 8) 00:16:54.968 266.999 - 268.516: 0.5780% ( 7) 00:16:54.968 268.516 - 270.033: 0.5844% ( 8) 00:16:54.968 270.033 - 271.550: 0.5950% ( 13) 00:16:54.968 271.550 - 273.067: 0.6022% ( 9) 00:16:54.968 273.067 - 274.584: 0.6135% ( 14) 00:16:54.968 274.584 - 276.101: 0.6257% ( 15) 00:16:54.968 276.101 - 277.618: 0.6313% ( 7) 00:16:54.968 277.618 - 279.135: 0.6418% ( 13) 00:16:54.968 279.135 - 280.652: 0.6523% ( 13) 00:16:54.969 280.652 - 282.169: 0.6645% ( 15) 00:16:54.969 
282.169 - 283.686: 0.6726% ( 10) 00:16:54.969 283.686 - 285.203: 0.6814% ( 11) 00:16:54.969 285.203 - 286.720: 0.6895% ( 10) 00:16:54.969 286.720 - 288.237: 0.7081% ( 23) 00:16:54.969 288.237 - 289.754: 0.7275% ( 24) 00:16:54.969 289.754 - 291.271: 0.7388% ( 14) 00:16:54.969 291.271 - 292.788: 0.7502% ( 14) 00:16:54.969 292.788 - 294.305: 0.7671% ( 21) 00:16:54.969 294.305 - 295.822: 0.7793% ( 15) 00:16:54.969 295.822 - 297.339: 0.7906% ( 14) 00:16:54.969 297.339 - 298.856: 0.8019% ( 14) 00:16:54.969 298.856 - 300.373: 0.8132% ( 14) 00:16:54.969 300.373 - 301.890: 0.8261% ( 16) 00:16:54.969 301.890 - 303.407: 0.8391% ( 16) 00:16:54.969 303.407 - 304.924: 0.8536% ( 18) 00:16:54.969 304.924 - 306.441: 0.8666% ( 16) 00:16:54.969 306.441 - 307.959: 0.8827% ( 20) 00:16:54.969 307.959 - 309.476: 0.8940% ( 14) 00:16:54.969 309.476 - 310.993: 0.9070% ( 16) 00:16:54.969 310.993 - 312.510: 0.9223% ( 19) 00:16:54.969 312.510 - 314.027: 0.9337% ( 14) 00:16:54.969 314.027 - 315.544: 0.9506% ( 21) 00:16:54.969 315.544 - 317.061: 0.9660% ( 19) 00:16:54.969 317.061 - 318.578: 0.9862% ( 25) 00:16:54.969 318.578 - 320.095: 1.0040% ( 22) 00:16:54.969 320.095 - 321.612: 1.0282% ( 30) 00:16:54.969 321.612 - 323.129: 1.0371% ( 11) 00:16:54.969 323.129 - 324.646: 1.0549% ( 22) 00:16:54.969 324.646 - 326.163: 1.0727% ( 22) 00:16:54.969 326.163 - 327.680: 1.0856% ( 16) 00:16:54.969 327.680 - 329.197: 1.1026% ( 21) 00:16:54.969 329.197 - 330.714: 1.1188% ( 20) 00:16:54.969 330.714 - 332.231: 1.1374% ( 23) 00:16:54.969 332.231 - 333.748: 1.1519% ( 18) 00:16:54.969 333.748 - 335.265: 1.1705% ( 23) 00:16:54.969 335.265 - 336.782: 1.1842% ( 17) 00:16:54.969 336.782 - 338.299: 1.1996% ( 19) 00:16:54.969 338.299 - 339.816: 1.2190% ( 24) 00:16:54.969 339.816 - 341.333: 1.2368% ( 22) 00:16:54.969 341.333 - 342.850: 1.2562% ( 24) 00:16:54.969 342.850 - 344.367: 1.2740% ( 22) 00:16:54.969 344.367 - 345.884: 1.2869% ( 16) 00:16:54.969 345.884 - 347.401: 1.3031% ( 20) 00:16:54.969 347.401 - 348.919: 1.3201% ( 21) 00:16:54.969 348.919 - 350.436: 1.3370% ( 21) 00:16:54.969 350.436 - 351.953: 1.3548% ( 22) 00:16:54.969 351.953 - 353.470: 1.3702% ( 19) 00:16:54.969 353.470 - 354.987: 1.3880% ( 22) 00:16:54.969 354.987 - 356.504: 1.4057% ( 22) 00:16:54.969 356.504 - 358.021: 1.4260% ( 25) 00:16:54.969 358.021 - 359.538: 1.4405% ( 18) 00:16:54.969 359.538 - 361.055: 1.4607% ( 25) 00:16:54.969 361.055 - 362.572: 1.4890% ( 35) 00:16:54.969 362.572 - 364.089: 1.5173% ( 35) 00:16:54.969 364.089 - 365.606: 1.5391% ( 27) 00:16:54.969 365.606 - 367.123: 1.5626% ( 29) 00:16:54.969 367.123 - 368.640: 1.5812% ( 23) 00:16:54.969 368.640 - 370.157: 1.6094% ( 35) 00:16:54.969 370.157 - 371.674: 1.6313% ( 27) 00:16:54.969 371.674 - 373.191: 1.6555% ( 30) 00:16:54.969 373.191 - 374.708: 1.6806% ( 31) 00:16:54.969 374.708 - 376.225: 1.7089% ( 35) 00:16:54.969 376.225 - 377.742: 1.7283% ( 24) 00:16:54.969 377.742 - 379.259: 1.7550% ( 33) 00:16:54.969 379.259 - 380.776: 1.7824% ( 34) 00:16:54.969 380.776 - 382.293: 1.8107% ( 35) 00:16:54.969 382.293 - 383.810: 1.8334% ( 28) 00:16:54.969 383.810 - 385.327: 1.8560% ( 28) 00:16:54.969 385.327 - 386.844: 1.8673% ( 14) 00:16:54.969 386.844 - 388.361: 1.8916% ( 30) 00:16:54.969 388.361 - 391.396: 1.9360% ( 55) 00:16:54.969 391.396 - 394.430: 1.9894% ( 66) 00:16:54.969 394.430 - 397.464: 2.0435% ( 67) 00:16:54.969 397.464 - 400.498: 2.0896% ( 57) 00:16:54.969 400.498 - 403.532: 2.1438% ( 67) 00:16:54.969 403.532 - 406.566: 2.1955% ( 64) 00:16:54.969 406.566 - 409.600: 2.2432% ( 59) 00:16:54.969 409.600 - 
412.634: 2.2901% ( 58) 00:16:54.969 412.634 - 415.668: 2.3612% ( 88) 00:16:54.969 415.668 - 418.702: 2.4057% ( 55) 00:16:54.969 418.702 - 421.736: 2.4687% ( 78) 00:16:54.969 421.736 - 424.770: 2.5277% ( 73) 00:16:54.969 424.770 - 427.804: 2.5932% ( 81) 00:16:54.969 427.804 - 430.839: 2.6433% ( 62) 00:16:54.969 430.839 - 433.873: 2.7040% ( 75) 00:16:54.969 433.873 - 436.907: 2.7646% ( 75) 00:16:54.969 436.907 - 439.941: 2.8171% ( 65) 00:16:54.969 439.941 - 442.975: 2.8867% ( 86) 00:16:54.969 442.975 - 446.009: 2.9659% ( 98) 00:16:54.969 446.009 - 449.043: 3.0257% ( 74) 00:16:54.969 449.043 - 452.077: 3.1033% ( 96) 00:16:54.969 452.077 - 455.111: 3.1696% ( 82) 00:16:54.969 455.111 - 458.145: 3.2601% ( 112) 00:16:54.969 458.145 - 461.179: 3.3523% ( 114) 00:16:54.969 461.179 - 464.213: 3.4234% ( 88) 00:16:54.969 464.213 - 467.247: 3.4986% ( 93) 00:16:54.969 467.247 - 470.281: 3.5762% ( 96) 00:16:54.969 470.281 - 473.316: 3.6506% ( 92) 00:16:54.969 473.316 - 476.350: 3.7387% ( 109) 00:16:54.969 476.350 - 479.384: 3.8211% ( 102) 00:16:54.969 479.384 - 482.418: 3.9036% ( 102) 00:16:54.969 482.418 - 485.452: 3.9755% ( 89) 00:16:54.969 485.452 - 488.486: 4.0475% ( 89) 00:16:54.969 488.486 - 491.520: 4.1315% ( 104) 00:16:54.969 491.520 - 494.554: 4.2310% ( 123) 00:16:54.969 494.554 - 497.588: 4.3199% ( 110) 00:16:54.969 497.588 - 500.622: 4.4185% ( 122) 00:16:54.969 500.622 - 503.656: 4.5196% ( 125) 00:16:54.969 503.656 - 506.690: 4.6117% ( 114) 00:16:54.969 506.690 - 509.724: 4.7119% ( 124) 00:16:54.969 509.724 - 512.759: 4.8162% ( 129) 00:16:54.969 512.759 - 515.793: 4.9221% ( 131) 00:16:54.969 515.793 - 518.827: 5.0409% ( 147) 00:16:54.969 518.827 - 521.861: 5.1379% ( 120) 00:16:54.969 521.861 - 524.895: 5.2301% ( 114) 00:16:54.969 524.895 - 527.929: 5.3457% ( 143) 00:16:54.969 527.929 - 530.963: 5.4572% ( 138) 00:16:54.969 530.963 - 533.997: 5.5672% ( 136) 00:16:54.969 533.997 - 537.031: 5.6820% ( 142) 00:16:54.969 537.031 - 540.065: 5.7846% ( 127) 00:16:54.969 540.065 - 543.099: 5.8905% ( 131) 00:16:54.969 543.099 - 546.133: 5.9997% ( 135) 00:16:54.969 546.133 - 549.167: 6.1047% ( 130) 00:16:54.969 549.167 - 552.201: 6.2341% ( 160) 00:16:54.969 552.201 - 555.236: 6.3432% ( 135) 00:16:54.970 555.236 - 558.270: 6.4726% ( 160) 00:16:54.970 558.270 - 561.304: 6.5914% ( 147) 00:16:54.970 561.304 - 564.338: 6.7110% ( 148) 00:16:54.970 564.338 - 567.372: 6.8509% ( 173) 00:16:54.970 567.372 - 570.406: 6.9915% ( 174) 00:16:54.970 570.406 - 573.440: 7.1176% ( 156) 00:16:54.970 573.440 - 576.474: 7.2542% ( 169) 00:16:54.970 576.474 - 579.508: 7.4062% ( 188) 00:16:54.970 579.508 - 582.542: 7.5388% ( 164) 00:16:54.970 582.542 - 585.576: 7.6673% ( 159) 00:16:54.970 585.576 - 588.610: 7.8047% ( 170) 00:16:54.970 588.610 - 591.644: 7.9325% ( 158) 00:16:54.970 591.644 - 594.679: 8.0666% ( 166) 00:16:54.970 594.679 - 597.713: 8.2291% ( 201) 00:16:54.970 597.713 - 600.747: 8.3690% ( 173) 00:16:54.970 600.747 - 603.781: 8.5201% ( 187) 00:16:54.970 603.781 - 606.815: 8.6705% ( 186) 00:16:54.970 606.815 - 609.849: 8.8208% ( 186) 00:16:54.970 609.849 - 612.883: 8.9728% ( 188) 00:16:54.970 612.883 - 615.917: 9.1296% ( 194) 00:16:54.970 615.917 - 618.951: 9.3156% ( 230) 00:16:54.970 618.951 - 621.985: 9.4627% ( 182) 00:16:54.970 621.985 - 625.019: 9.6373% ( 216) 00:16:54.970 625.019 - 628.053: 9.8022% ( 204) 00:16:54.970 628.053 - 631.087: 9.9728% ( 211) 00:16:54.970 631.087 - 634.121: 10.1320% ( 197) 00:16:54.970 634.121 - 637.156: 10.2896% ( 195) 00:16:54.970 637.156 - 640.190: 10.4642% ( 216) 00:16:54.970 640.190 - 
643.224: 10.6550% ( 236) 00:16:54.970 643.224 - 646.258: 10.8426% ( 232) 00:16:54.970 646.258 - 649.292: 11.0067% ( 203) 00:16:54.970 649.292 - 652.326: 11.1902% ( 227) 00:16:54.970 652.326 - 655.360: 11.3963% ( 255) 00:16:54.970 655.360 - 658.394: 11.5515% ( 192) 00:16:54.970 658.394 - 661.428: 11.7366% ( 229) 00:16:54.970 661.428 - 664.462: 11.9258% ( 234) 00:16:54.970 664.462 - 667.496: 12.1311% ( 254) 00:16:54.970 667.496 - 670.530: 12.3065% ( 217) 00:16:54.970 670.530 - 673.564: 12.5167% ( 260) 00:16:54.970 673.564 - 676.599: 12.7091% ( 238) 00:16:54.970 676.599 - 679.633: 12.9079% ( 246) 00:16:54.970 679.633 - 682.667: 13.1278% ( 272) 00:16:54.970 682.667 - 685.701: 13.3194% ( 237) 00:16:54.970 685.701 - 688.735: 13.5425% ( 276) 00:16:54.970 688.735 - 691.769: 13.7341% ( 237) 00:16:54.970 691.769 - 694.803: 13.9459% ( 262) 00:16:54.970 694.803 - 697.837: 14.1908% ( 303) 00:16:54.970 697.837 - 700.871: 14.4050% ( 265) 00:16:54.970 700.871 - 703.905: 14.6208% ( 267) 00:16:54.970 703.905 - 706.939: 14.8536% ( 288) 00:16:54.970 706.939 - 709.973: 15.1050% ( 311) 00:16:54.970 709.973 - 713.007: 15.3185% ( 264) 00:16:54.970 713.007 - 716.041: 15.5383% ( 272) 00:16:54.970 716.041 - 719.076: 15.7849% ( 305) 00:16:54.970 719.076 - 722.110: 16.0233% ( 295) 00:16:54.970 722.110 - 725.144: 16.2626% ( 296) 00:16:54.970 725.144 - 728.178: 16.5051% ( 300) 00:16:54.970 728.178 - 731.212: 16.7468% ( 299) 00:16:54.970 731.212 - 734.246: 17.0168% ( 334) 00:16:54.970 734.246 - 737.280: 17.2706% ( 314) 00:16:54.970 737.280 - 740.314: 17.5487% ( 344) 00:16:54.970 740.314 - 743.348: 17.8220% ( 338) 00:16:54.970 743.348 - 746.382: 18.1202% ( 369) 00:16:54.970 746.382 - 749.416: 18.4161% ( 366) 00:16:54.970 749.416 - 752.450: 18.6877% ( 336) 00:16:54.970 752.450 - 755.484: 18.9609% ( 338) 00:16:54.970 755.484 - 758.519: 19.2390% ( 344) 00:16:54.970 758.519 - 761.553: 19.5098% ( 335) 00:16:54.970 761.553 - 764.587: 19.7887% ( 345) 00:16:54.970 764.587 - 767.621: 20.0708% ( 349) 00:16:54.970 767.621 - 770.655: 20.3586% ( 356) 00:16:54.970 770.655 - 773.689: 20.6472% ( 357) 00:16:54.970 773.689 - 776.723: 20.9309% ( 351) 00:16:54.970 776.723 - 782.791: 21.5000% ( 704) 00:16:54.970 782.791 - 788.859: 22.0780% ( 715) 00:16:54.970 788.859 - 794.927: 22.7150% ( 788) 00:16:54.970 794.927 - 800.996: 23.3398% ( 773) 00:16:54.970 800.996 - 807.064: 23.9679% ( 777) 00:16:54.970 807.064 - 813.132: 24.6203% ( 807) 00:16:54.970 813.132 - 819.200: 25.2613% ( 793) 00:16:54.970 819.200 - 825.268: 25.9201% ( 815) 00:16:54.970 825.268 - 831.336: 26.5644% ( 797) 00:16:54.970 831.336 - 837.404: 27.2717% ( 875) 00:16:54.970 837.404 - 843.473: 27.9588% ( 850) 00:16:54.970 843.473 - 849.541: 28.6653% ( 874) 00:16:54.970 849.541 - 855.609: 29.3597% ( 859) 00:16:54.970 855.609 - 861.677: 30.0646% ( 872) 00:16:54.970 861.677 - 867.745: 30.7646% ( 866) 00:16:54.970 867.745 - 873.813: 31.3984% ( 784) 00:16:54.970 873.813 - 879.881: 32.1356% ( 912) 00:16:54.970 879.881 - 885.950: 32.8138% ( 839) 00:16:54.970 885.950 - 892.018: 33.5252% ( 880) 00:16:54.970 892.018 - 898.086: 34.2349% ( 878) 00:16:54.970 898.086 - 904.154: 34.9657% ( 904) 00:16:54.970 904.154 - 910.222: 35.6649% ( 865) 00:16:54.970 910.222 - 916.290: 36.4070% ( 918) 00:16:54.970 916.290 - 922.359: 37.1458% ( 914) 00:16:54.970 922.359 - 928.427: 37.8459% ( 866) 00:16:54.970 928.427 - 934.495: 38.5653% ( 890) 00:16:54.970 934.495 - 940.563: 39.2629% ( 863) 00:16:54.970 940.563 - 946.631: 40.0188% ( 935) 00:16:54.970 946.631 - 952.699: 40.7665% ( 925) 00:16:54.970 952.699 - 
958.767: 41.5328% ( 948) 00:16:54.970 958.767 - 964.836: 42.2466% ( 883) 00:16:54.970 964.836 - 970.904: 42.9790% ( 906) 00:16:54.970 970.904 - 976.972: 43.7267% ( 925) 00:16:54.970 976.972 - 983.040: 44.4898% ( 944) 00:16:54.970 983.040 - 989.108: 45.3062% ( 1010) 00:16:54.970 989.108 - 995.176: 46.0871% ( 966) 00:16:54.970 995.176 - 1001.244: 46.9060% ( 1013) 00:16:54.970 1001.244 - 1007.313: 47.6950% ( 976) 00:16:54.970 1007.313 - 1013.381: 48.4508% ( 935) 00:16:54.970 1013.381 - 1019.449: 49.2236% ( 956) 00:16:54.970 1019.449 - 1025.517: 49.9665% ( 919) 00:16:54.970 1025.517 - 1031.585: 50.7603% ( 982) 00:16:54.970 1031.585 - 1037.653: 51.5234% ( 944) 00:16:54.970 1037.653 - 1043.721: 52.2646% ( 917) 00:16:54.970 1043.721 - 1049.790: 52.9897% ( 897) 00:16:54.970 1049.790 - 1055.858: 53.7528% ( 944) 00:16:54.970 1055.858 - 1061.926: 54.5022% ( 927) 00:16:54.970 1061.926 - 1067.994: 55.2224% ( 891) 00:16:54.970 1067.994 - 1074.062: 55.9912% ( 951) 00:16:54.970 1074.062 - 1080.130: 56.7138% ( 894) 00:16:54.970 1080.130 - 1086.199: 57.4624% ( 926) 00:16:54.971 1086.199 - 1092.267: 58.1697% ( 875) 00:16:54.971 1092.267 - 1098.335: 58.9377% ( 950) 00:16:54.971 1098.335 - 1104.403: 59.6773% ( 915) 00:16:54.971 1104.403 - 1110.471: 60.3782% ( 867) 00:16:54.971 1110.471 - 1116.539: 61.1315% ( 932) 00:16:54.971 1116.539 - 1122.607: 61.8809% ( 927) 00:16:54.971 1122.607 - 1128.676: 62.6286% ( 925) 00:16:54.971 1128.676 - 1134.744: 63.3529% ( 896) 00:16:54.971 1134.744 - 1140.812: 64.0570% ( 871) 00:16:54.971 1140.812 - 1146.880: 64.7546% ( 863) 00:16:54.971 1146.880 - 1152.948: 65.4369% ( 844) 00:16:54.971 1152.948 - 1159.016: 66.1313% ( 859) 00:16:54.971 1159.016 - 1165.084: 66.8515% ( 891) 00:16:54.971 1165.084 - 1171.153: 67.5184% ( 825) 00:16:54.971 1171.153 - 1177.221: 68.1756% ( 813) 00:16:54.971 1177.221 - 1183.289: 68.8320% ( 812) 00:16:54.971 1183.289 - 1189.357: 69.4771% ( 798) 00:16:54.971 1189.357 - 1195.425: 70.0833% ( 750) 00:16:54.971 1195.425 - 1201.493: 70.7074% ( 772) 00:16:54.971 1201.493 - 1207.561: 71.2797% ( 708) 00:16:54.971 1207.561 - 1213.630: 71.8933% ( 759) 00:16:54.971 1213.630 - 1219.698: 72.5165% ( 771) 00:16:54.971 1219.698 - 1225.766: 73.1236% ( 751) 00:16:54.971 1225.766 - 1231.834: 73.7153% ( 732) 00:16:54.971 1231.834 - 1237.902: 74.2682% ( 684) 00:16:54.971 1237.902 - 1243.970: 74.8300% ( 695) 00:16:54.971 1243.970 - 1250.039: 75.3716% ( 670) 00:16:54.971 1250.039 - 1256.107: 75.9359% ( 698) 00:16:54.971 1256.107 - 1262.175: 76.4815% ( 675) 00:16:54.971 1262.175 - 1268.243: 77.0304% ( 679) 00:16:54.971 1268.243 - 1274.311: 77.5364% ( 626) 00:16:54.971 1274.311 - 1280.379: 78.0732% ( 664) 00:16:54.971 1280.379 - 1286.447: 78.6213% ( 678) 00:16:54.971 1286.447 - 1292.516: 79.1580% ( 664) 00:16:54.971 1292.516 - 1298.584: 79.6204% ( 572) 00:16:54.971 1298.584 - 1304.652: 80.1022% ( 596) 00:16:54.971 1304.652 - 1310.720: 80.5638% ( 571) 00:16:54.971 1310.720 - 1316.788: 81.0302% ( 577) 00:16:54.971 1316.788 - 1322.856: 81.4909% ( 570) 00:16:54.971 1322.856 - 1328.924: 81.9266% ( 539) 00:16:54.971 1328.924 - 1334.993: 82.3923% ( 576) 00:16:54.971 1334.993 - 1341.061: 82.8142% ( 522) 00:16:54.971 1341.061 - 1347.129: 83.2556% ( 546) 00:16:54.971 1347.129 - 1353.197: 83.6759% ( 520) 00:16:54.971 1353.197 - 1359.265: 84.1036% ( 529) 00:16:54.971 1359.265 - 1365.333: 84.5045% ( 496) 00:16:54.971 1365.333 - 1371.401: 84.9192% ( 513) 00:16:54.971 1371.401 - 1377.470: 85.3016% ( 473) 00:16:54.971 1377.470 - 1383.538: 85.7074% ( 502) 00:16:54.971 1383.538 - 1389.606: 
86.0978% ( 483) 00:16:54.971 1389.606 - 1395.674: 86.5036% ( 502) 00:16:54.971 1395.674 - 1401.742: 86.9013% ( 492) 00:16:54.971 1401.742 - 1407.810: 87.3055% ( 500) 00:16:54.971 1407.810 - 1413.879: 87.7008% ( 489) 00:16:54.971 1413.879 - 1419.947: 88.0767% ( 465) 00:16:54.971 1419.947 - 1426.015: 88.4202% ( 425) 00:16:54.971 1426.015 - 1432.083: 88.7622% ( 423) 00:16:54.971 1432.083 - 1438.151: 89.1340% ( 460) 00:16:54.971 1438.151 - 1444.219: 89.4759% ( 423) 00:16:54.971 1444.219 - 1450.287: 89.8090% ( 412) 00:16:54.971 1450.287 - 1456.356: 90.1428% ( 413) 00:16:54.971 1456.356 - 1462.424: 90.4557% ( 387) 00:16:54.971 1462.424 - 1468.492: 90.7709% ( 390) 00:16:54.971 1468.492 - 1474.560: 91.0692% ( 369) 00:16:54.971 1474.560 - 1480.628: 91.3950% ( 403) 00:16:54.971 1480.628 - 1486.696: 91.6674% ( 337) 00:16:54.971 1486.696 - 1492.764: 91.9544% ( 355) 00:16:54.971 1492.764 - 1498.833: 92.2502% ( 366) 00:16:54.971 1498.833 - 1504.901: 92.5396% ( 358) 00:16:54.971 1504.901 - 1510.969: 92.8137% ( 339) 00:16:54.971 1510.969 - 1517.037: 93.0675% ( 314) 00:16:54.971 1517.037 - 1523.105: 93.3221% ( 315) 00:16:54.971 1523.105 - 1529.173: 93.5679% ( 304) 00:16:54.971 1529.173 - 1535.241: 93.8330% ( 328) 00:16:54.971 1535.241 - 1541.310: 94.0747% ( 299) 00:16:54.971 1541.310 - 1547.378: 94.3002% ( 279) 00:16:54.971 1547.378 - 1553.446: 94.5169% ( 268) 00:16:54.971 1553.446 - 1565.582: 94.9340% ( 516) 00:16:54.971 1565.582 - 1577.719: 95.3349% ( 496) 00:16:54.971 1577.719 - 1589.855: 95.7157% ( 471) 00:16:54.971 1589.855 - 1601.991: 96.0697% ( 438) 00:16:54.971 1601.991 - 1614.127: 96.4044% ( 414) 00:16:54.971 1614.127 - 1626.264: 96.7067% ( 374) 00:16:54.971 1626.264 - 1638.400: 96.9897% ( 350) 00:16:54.971 1638.400 - 1650.536: 97.2427% ( 313) 00:16:54.971 1650.536 - 1662.673: 97.4828% ( 297) 00:16:54.971 1662.673 - 1674.809: 97.7220% ( 296) 00:16:54.971 1674.809 - 1686.945: 97.9573% ( 291) 00:16:54.971 1686.945 - 1699.081: 98.1489% ( 237) 00:16:54.971 1699.081 - 1711.218: 98.3332% ( 228) 00:16:54.971 1711.218 - 1723.354: 98.4981% ( 204) 00:16:54.971 1723.354 - 1735.490: 98.6436% ( 180) 00:16:54.971 1735.490 - 1747.627: 98.7875% ( 178) 00:16:54.971 1747.627 - 1759.763: 98.9103% ( 152) 00:16:54.971 1759.763 - 1771.899: 99.0211% ( 137) 00:16:54.971 1771.899 - 1784.036: 99.1334% ( 139) 00:16:54.971 1784.036 - 1796.172: 99.2442% ( 137) 00:16:54.971 1796.172 - 1808.308: 99.3274% ( 103) 00:16:54.971 1808.308 - 1820.444: 99.3937% ( 82) 00:16:54.971 1820.444 - 1832.581: 99.4616% ( 84) 00:16:54.971 1832.581 - 1844.717: 99.5255% ( 79) 00:16:54.971 1844.717 - 1856.853: 99.5829% ( 71) 00:16:54.971 1856.853 - 1868.990: 99.6233% ( 50) 00:16:54.971 1868.990 - 1881.126: 99.6742% ( 63) 00:16:54.971 1881.126 - 1893.262: 99.7219% ( 59) 00:16:54.971 1893.262 - 1905.399: 99.7518% ( 37) 00:16:54.971 1905.399 - 1917.535: 99.7809% ( 36) 00:16:54.971 1917.535 - 1929.671: 99.8068% ( 32) 00:16:54.971 1929.671 - 1941.807: 99.8319% ( 31) 00:16:54.971 1941.807 - 1953.944: 99.8464% ( 18) 00:16:54.971 1953.944 - 1966.080: 99.8585% ( 15) 00:16:54.971 1966.080 - 1978.216: 99.8755% ( 21) 00:16:54.971 1978.216 - 1990.353: 99.8876% ( 15) 00:16:54.971 1990.353 - 2002.489: 99.8941% ( 8) 00:16:54.971 2002.489 - 2014.625: 99.8990% ( 6) 00:16:54.971 2014.625 - 2026.761: 99.9030% ( 5) 00:16:54.971 2026.761 - 2038.898: 99.9046% ( 2) 00:16:54.971 2038.898 - 2051.034: 99.9103% ( 7) 00:16:54.971 2063.170 - 2075.307: 99.9167% ( 8) 00:16:54.971 2075.307 - 2087.443: 99.9208% ( 5) 00:16:54.971 2087.443 - 2099.579: 99.9216% ( 1) 00:16:54.971 
2123.852 - 2135.988: 99.9248% ( 4) 00:16:54.971 2135.988 - 2148.124: 99.9281% ( 4) 00:16:54.972 2148.124 - 2160.261: 99.9289% ( 1) 00:16:54.972 2160.261 - 2172.397: 99.9305% ( 2) 00:16:54.972 2172.397 - 2184.533: 99.9329% ( 3) 00:16:54.972 2184.533 - 2196.670: 99.9353% ( 3) 00:16:54.972 2196.670 - 2208.806: 99.9361% ( 1) 00:16:54.972 2208.806 - 2220.942: 99.9386% ( 3) 00:16:54.972 2220.942 - 2233.079: 99.9418% ( 4) 00:16:54.972 2245.215 - 2257.351: 99.9458% ( 5) 00:16:54.972 2257.351 - 2269.487: 99.9491% ( 4) 00:16:54.972 2269.487 - 2281.624: 99.9507% ( 2) 00:16:54.972 2281.624 - 2293.760: 99.9523% ( 2) 00:16:54.972 2293.760 - 2305.896: 99.9539% ( 2) 00:16:54.972 2305.896 - 2318.033: 99.9563% ( 3) 00:16:54.972 2330.169 - 2342.305: 99.9588% ( 3) 00:16:54.972 2342.305 - 2354.441: 99.9636% ( 6) 00:16:54.972 2354.441 - 2366.578: 99.9644% ( 1) 00:16:54.972 2366.578 - 2378.714: 99.9652% ( 1) 00:16:54.972 2378.714 - 2390.850: 99.9660% ( 1) 00:16:54.972 2390.850 - 2402.987: 99.9685% ( 3) 00:16:54.972 2402.987 - 2415.123: 99.9701% ( 2) 00:16:54.972 2415.123 - 2427.259: 99.9717% ( 2) 00:16:54.972 2427.259 - 2439.396: 99.9741% ( 3) 00:16:54.972 2439.396 - 2451.532: 99.9757% ( 2) 00:16:54.972 2463.668 - 2475.804: 99.9790% ( 4) 00:16:54.972 2475.804 - 2487.941: 99.9814% ( 3) 00:16:54.972 2512.213 - 2524.350: 99.9838% ( 3) 00:16:54.972 2524.350 - 2536.486: 99.9854% ( 2) 00:16:54.972 2548.622 - 2560.759: 99.9871% ( 2) 00:16:54.972 2560.759 - 2572.895: 99.9887% ( 2) 00:16:54.972 2585.031 - 2597.167: 99.9895% ( 1) 00:16:54.972 2597.167 - 2609.304: 99.9903% ( 1) 00:16:54.972 2609.304 - 2621.440: 99.9919% ( 2) 00:16:54.972 2633.576 - 2645.713: 99.9927% ( 1) 00:16:54.972 2657.849 - 2669.985: 99.9943% ( 2) 00:16:54.972 2669.985 - 2682.121: 99.9951% ( 1) 00:16:54.972 2682.121 - 2694.258: 99.9960% ( 1) 00:16:54.972 2706.394 - 2718.530: 99.9968% ( 1) 00:16:54.972 2718.530 - 2730.667: 99.9976% ( 1) 00:16:54.972 2742.803 - 2754.939: 99.9984% ( 1) 00:16:54.972 2754.939 - 2767.076: 100.0000% ( 2) 00:16:54.972 00:16:54.972 23:58:33 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:16:56.351 Initializing NVMe Controllers 00:16:56.351 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:16:56.351 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:16:56.351 Initialization complete. Launching workers. 
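For context before the results table: the spdk_nvme_perf invocation above runs a sequential-write workload (-w write) of 12288-byte I/Os (-o 12288) at queue depth 128 (-q 128) for one second (-t 1) in shared-memory group 0 (-i 0); -L turns on software latency tracking, and doubling it (-LL) also emits the detailed per-bucket histogram that follows. A minimal sketch of reproducing the run and pulling a few percentiles out of the summary block — the $SPDK variable, perf.log path, and grep pattern are illustrative assumptions, not part of the harness:

```bash
#!/usr/bin/env bash
# Sketch only: re-run the 12 KiB sequential-write workload from this log
# and extract a few cumulative-latency percentiles from the summary block.
# Assumes an SPDK build tree at $SPDK, a controller already bound to a
# userspace driver (scripts/setup.sh), and root privileges.
SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk

# -q 128: queue depth; -w write: sequential writes; -o 12288: I/O size in
# bytes; -t 1: run time in seconds; -LL: software latency tracking plus
# the detailed histogram; -i 0: shared-memory group ID.
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0 |
    tee perf.log

# The summary block prints lines like "50.00000% : 1243.970us"; pick out
# the median, p99, and p99.99 entries.
grep -E '(50\.00000|99\.00000|99\.99000)% :' perf.log
```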
00:16:56.351 ======================================================== 00:16:56.351 Latency(us) 00:16:56.351 Device Information : IOPS MiB/s Average min max 00:16:56.351 PCIE (0000:84:00.0) NSID 1 from core 0: 93496.85 1095.67 1369.04 13.91 3545.80 00:16:56.351 ======================================================== 00:16:56.351 Total : 93496.85 1095.67 1369.04 13.91 3545.80 00:16:56.351 00:16:56.351 Summary latency data for PCIE (0000:84:00.0) NSID 1 from core 0: 00:16:56.351 ================================================================================= 00:16:56.351 1.00000% : 15.265us 00:16:56.351 10.00000% : 46.839us 00:16:56.351 25.00000% : 292.788us 00:16:56.351 50.00000% : 1243.970us 00:16:56.351 75.00000% : 2318.033us 00:16:56.351 90.00000% : 2936.984us 00:16:56.351 95.00000% : 3070.483us 00:16:56.351 98.00000% : 3179.710us 00:16:56.351 99.00000% : 3228.255us 00:16:56.351 99.50000% : 3276.800us 00:16:56.351 99.90000% : 3373.890us 00:16:56.351 99.99000% : 3519.526us 00:16:56.351 99.99900% : 3568.071us 00:16:56.351 99.99990% : 3568.071us 00:16:56.351 99.99999% : 3568.071us 00:16:56.351 00:16:56.351 Latency histogram for PCIE (0000:84:00.0) NSID 1 from core 0: 00:16:56.351 ============================================================================== 00:16:56.351 Range in us Cumulative IO count 00:16:56.351 13.843 - 13.938: 0.0021% ( 2) 00:16:56.351 13.938 - 14.033: 0.0203% ( 17) 00:16:56.351 14.033 - 14.127: 0.0428% ( 21) 00:16:56.351 14.127 - 14.222: 0.0952% ( 49) 00:16:56.351 14.222 - 14.317: 0.1219% ( 25) 00:16:56.351 14.317 - 14.412: 0.1551% ( 31) 00:16:56.351 14.412 - 14.507: 0.2278% ( 68) 00:16:56.351 14.507 - 14.601: 0.3080% ( 75) 00:16:56.351 14.601 - 14.696: 0.3497% ( 39) 00:16:56.351 14.696 - 14.791: 0.3721% ( 21) 00:16:56.351 14.791 - 14.886: 0.3999% ( 26) 00:16:56.351 14.886 - 14.981: 0.4192% ( 18) 00:16:56.351 14.981 - 15.076: 0.5283% ( 102) 00:16:56.351 15.076 - 15.170: 0.9453% ( 390) 00:16:56.351 15.170 - 15.265: 1.6875% ( 694) 00:16:56.351 15.265 - 15.360: 1.9131% ( 211) 00:16:56.351 15.360 - 15.455: 2.0201% ( 100) 00:16:56.351 15.455 - 15.550: 2.1569% ( 128) 00:16:56.351 15.550 - 15.644: 2.2895% ( 124) 00:16:56.351 15.644 - 15.739: 2.4403% ( 141) 00:16:56.351 15.739 - 15.834: 2.4852% ( 42) 00:16:56.351 15.834 - 15.929: 2.5066% ( 20) 00:16:56.351 15.929 - 16.024: 2.5312% ( 23) 00:16:56.351 16.024 - 16.119: 2.5462% ( 14) 00:16:56.351 16.119 - 16.213: 2.5783% ( 30) 00:16:56.351 16.213 - 16.308: 2.6232% ( 42) 00:16:56.351 16.308 - 16.403: 2.6874% ( 60) 00:16:56.351 16.403 - 16.498: 2.7259% ( 36) 00:16:56.351 16.498 - 16.593: 2.7537% ( 26) 00:16:56.351 16.593 - 16.687: 2.7654% ( 11) 00:16:56.351 16.687 - 16.782: 2.7815% ( 15) 00:16:56.351 16.782 - 16.877: 2.8125% ( 29) 00:16:56.351 16.877 - 16.972: 2.8724% ( 56) 00:16:56.351 16.972 - 17.067: 2.9419% ( 65) 00:16:56.351 17.067 - 17.161: 2.9750% ( 31) 00:16:56.351 17.161 - 17.256: 2.9911% ( 15) 00:16:56.351 17.256 - 17.351: 3.0018% ( 10) 00:16:56.351 17.351 - 17.446: 3.0146% ( 12) 00:16:56.351 17.446 - 17.541: 3.0424% ( 26) 00:16:56.351 17.541 - 17.636: 3.0787% ( 34) 00:16:56.351 17.636 - 17.730: 3.0894% ( 10) 00:16:56.351 17.730 - 17.825: 3.0969% ( 7) 00:16:56.351 17.825 - 17.920: 3.1055% ( 8) 00:16:56.351 17.920 - 18.015: 3.1162% ( 10) 00:16:56.351 18.015 - 18.110: 3.1418% ( 24) 00:16:56.351 18.110 - 18.204: 3.1793% ( 35) 00:16:56.351 18.204 - 18.299: 3.2049% ( 24) 00:16:56.351 18.299 - 18.394: 3.2135% ( 8) 00:16:56.351 18.394 - 18.489: 3.2210% ( 7) 00:16:56.351 18.489 - 18.584: 3.2295% ( 8) 00:16:56.351 18.584 - 18.679: 
3.2531% ( 22) 00:16:56.351 18.679 - 18.773: 3.2702% ( 16) 00:16:56.351 18.773 - 18.868: 3.2787% ( 8) 00:16:56.351 18.868 - 18.963: 3.2851% ( 6) 00:16:56.351 18.963 - 19.058: 3.2894% ( 4) 00:16:56.351 19.058 - 19.153: 3.3001% ( 10) 00:16:56.351 19.153 - 19.247: 3.3279% ( 26) 00:16:56.351 19.247 - 19.342: 3.3546% ( 25) 00:16:56.351 19.342 - 19.437: 3.3675% ( 12) 00:16:56.351 19.437 - 19.532: 3.3792% ( 11) 00:16:56.351 19.532 - 19.627: 3.3835% ( 4) 00:16:56.351 19.627 - 19.721: 3.3921% ( 8) 00:16:56.351 19.721 - 19.816: 3.4028% ( 10) 00:16:56.352 19.816 - 19.911: 3.4124% ( 9) 00:16:56.352 19.911 - 20.006: 3.4199% ( 7) 00:16:56.352 20.006 - 20.101: 3.4252% ( 5) 00:16:56.352 20.101 - 20.196: 3.4274% ( 2) 00:16:56.352 20.196 - 20.290: 3.4327% ( 5) 00:16:56.352 20.290 - 20.385: 3.4370% ( 4) 00:16:56.352 20.385 - 20.480: 3.4413% ( 4) 00:16:56.352 20.480 - 20.575: 3.4423% ( 1) 00:16:56.352 20.575 - 20.670: 3.4455% ( 3) 00:16:56.352 20.670 - 20.764: 3.4466% ( 1) 00:16:56.352 20.764 - 20.859: 3.4477% ( 1) 00:16:56.352 20.859 - 20.954: 3.4509% ( 3) 00:16:56.352 20.954 - 21.049: 3.4562% ( 5) 00:16:56.352 21.049 - 21.144: 3.4594% ( 3) 00:16:56.352 21.144 - 21.239: 3.4616% ( 2) 00:16:56.352 21.239 - 21.333: 3.4627% ( 1) 00:16:56.352 21.333 - 21.428: 3.4637% ( 1) 00:16:56.352 21.428 - 21.523: 3.4659% ( 2) 00:16:56.352 21.523 - 21.618: 3.4680% ( 2) 00:16:56.352 21.618 - 21.713: 3.4701% ( 2) 00:16:56.352 21.713 - 21.807: 3.4734% ( 3) 00:16:56.352 21.807 - 21.902: 3.4744% ( 1) 00:16:56.352 21.902 - 21.997: 3.4755% ( 1) 00:16:56.352 22.092 - 22.187: 3.4787% ( 3) 00:16:56.352 22.187 - 22.281: 3.4798% ( 1) 00:16:56.352 22.471 - 22.566: 3.4819% ( 2) 00:16:56.352 22.566 - 22.661: 3.4830% ( 1) 00:16:56.352 22.661 - 22.756: 3.4851% ( 2) 00:16:56.352 22.756 - 22.850: 3.4873% ( 2) 00:16:56.352 22.850 - 22.945: 3.4883% ( 1) 00:16:56.352 22.945 - 23.040: 3.4894% ( 1) 00:16:56.352 23.040 - 23.135: 3.4915% ( 2) 00:16:56.352 23.135 - 23.230: 3.4947% ( 3) 00:16:56.352 23.230 - 23.324: 3.4969% ( 2) 00:16:56.352 23.419 - 23.514: 3.5012% ( 4) 00:16:56.352 23.704 - 23.799: 3.5033% ( 2) 00:16:56.352 23.799 - 23.893: 3.5065% ( 3) 00:16:56.352 23.893 - 23.988: 3.5086% ( 2) 00:16:56.352 23.988 - 24.083: 3.5097% ( 1) 00:16:56.352 24.083 - 24.178: 3.5108% ( 1) 00:16:56.352 24.178 - 24.273: 3.5118% ( 1) 00:16:56.352 24.273 - 24.462: 3.5151% ( 3) 00:16:56.352 24.462 - 24.652: 3.5354% ( 19) 00:16:56.352 24.652 - 24.841: 3.5493% ( 13) 00:16:56.352 24.841 - 25.031: 3.5621% ( 12) 00:16:56.352 25.031 - 25.221: 3.5792% ( 16) 00:16:56.352 25.221 - 25.410: 3.5921% ( 12) 00:16:56.352 25.410 - 25.600: 3.6049% ( 12) 00:16:56.352 25.600 - 25.790: 3.6177% ( 12) 00:16:56.352 25.790 - 25.979: 3.6348% ( 16) 00:16:56.352 25.979 - 26.169: 3.6594% ( 23) 00:16:56.352 26.169 - 26.359: 3.6851% ( 24) 00:16:56.352 26.359 - 26.548: 3.6990% ( 13) 00:16:56.352 26.548 - 26.738: 3.7086% ( 9) 00:16:56.352 26.738 - 26.927: 3.7236% ( 14) 00:16:56.352 26.927 - 27.117: 3.7364% ( 12) 00:16:56.352 27.117 - 27.307: 3.7471% ( 10) 00:16:56.352 27.307 - 27.496: 3.7589% ( 11) 00:16:56.352 27.496 - 27.686: 3.7664% ( 7) 00:16:56.352 27.686 - 27.876: 3.7738% ( 7) 00:16:56.352 27.876 - 28.065: 3.7867% ( 12) 00:16:56.352 28.065 - 28.255: 3.8017% ( 14) 00:16:56.352 28.255 - 28.444: 3.8123% ( 10) 00:16:56.352 28.444 - 28.634: 3.8241% ( 11) 00:16:56.352 28.634 - 28.824: 3.8391% ( 14) 00:16:56.352 28.824 - 29.013: 3.8530% ( 13) 00:16:56.352 29.013 - 29.203: 3.8690% ( 15) 00:16:56.352 29.203 - 29.393: 3.8872% ( 17) 00:16:56.352 29.393 - 29.582: 3.9129% ( 24) 00:16:56.352 29.582 - 
29.772: 3.9204% ( 7) 00:16:56.352 29.772 - 29.961: 3.9321% ( 11) 00:16:56.352 29.961 - 30.151: 3.9375% ( 5) 00:16:56.352 30.151 - 30.341: 3.9503% ( 12) 00:16:56.352 30.341 - 30.530: 3.9674% ( 16) 00:16:56.352 30.530 - 30.720: 3.9920% ( 23) 00:16:56.352 30.720 - 30.910: 4.0102% ( 17) 00:16:56.352 30.910 - 31.099: 4.0305% ( 19) 00:16:56.352 31.099 - 31.289: 4.0604% ( 28) 00:16:56.352 31.289 - 31.479: 4.1064% ( 43) 00:16:56.352 31.479 - 31.668: 4.1717% ( 61) 00:16:56.352 31.668 - 31.858: 4.2572% ( 80) 00:16:56.352 31.858 - 32.047: 4.4155% ( 148) 00:16:56.352 32.047 - 32.237: 4.7438% ( 307) 00:16:56.352 32.237 - 32.427: 4.9876% ( 228) 00:16:56.352 32.427 - 32.616: 5.2389% ( 235) 00:16:56.352 32.616 - 32.806: 5.5041% ( 248) 00:16:56.352 32.806 - 32.996: 5.7148% ( 197) 00:16:56.352 32.996 - 33.185: 5.9041% ( 177) 00:16:56.352 33.185 - 33.375: 6.0923% ( 176) 00:16:56.352 33.375 - 33.564: 6.2302% ( 129) 00:16:56.352 33.564 - 33.754: 6.3211% ( 85) 00:16:56.352 33.754 - 33.944: 6.4035% ( 77) 00:16:56.352 33.944 - 34.133: 6.4708% ( 63) 00:16:56.352 34.133 - 34.323: 6.5724% ( 95) 00:16:56.352 34.323 - 34.513: 6.6868% ( 107) 00:16:56.352 34.513 - 34.702: 6.7724% ( 80) 00:16:56.352 34.702 - 34.892: 6.8173% ( 42) 00:16:56.352 34.892 - 35.081: 6.8708% ( 50) 00:16:56.352 35.081 - 35.271: 6.9168% ( 43) 00:16:56.352 35.271 - 35.461: 6.9670% ( 47) 00:16:56.352 35.461 - 35.650: 7.0408% ( 69) 00:16:56.352 35.650 - 35.840: 7.1018% ( 57) 00:16:56.352 35.840 - 36.030: 7.1510% ( 46) 00:16:56.352 36.030 - 36.219: 7.2098% ( 55) 00:16:56.352 36.219 - 36.409: 7.2707% ( 57) 00:16:56.352 36.409 - 36.599: 7.3606% ( 84) 00:16:56.352 36.599 - 36.788: 7.5017% ( 132) 00:16:56.352 36.788 - 36.978: 7.5926% ( 85) 00:16:56.352 36.978 - 37.167: 7.6814% ( 83) 00:16:56.352 37.167 - 37.357: 7.7562% ( 70) 00:16:56.352 37.357 - 37.547: 7.7904% ( 32) 00:16:56.352 37.547 - 37.736: 7.8493% ( 55) 00:16:56.352 37.736 - 37.926: 7.9134% ( 60) 00:16:56.352 37.926 - 38.116: 8.0022% ( 83) 00:16:56.352 38.116 - 38.305: 8.1209% ( 111) 00:16:56.352 38.305 - 38.495: 8.2203% ( 93) 00:16:56.352 38.495 - 38.684: 8.2824% ( 58) 00:16:56.352 38.684 - 38.874: 8.3262% ( 41) 00:16:56.352 38.874 - 39.064: 8.3583% ( 30) 00:16:56.352 39.064 - 39.253: 8.3946% ( 34) 00:16:56.352 39.253 - 39.443: 8.4374% ( 40) 00:16:56.352 39.443 - 39.633: 8.4748% ( 35) 00:16:56.352 39.633 - 39.822: 8.5144% ( 37) 00:16:56.352 39.822 - 40.012: 8.5390% ( 23) 00:16:56.352 40.012 - 40.201: 8.5647% ( 24) 00:16:56.352 40.201 - 40.391: 8.5903% ( 24) 00:16:56.352 40.391 - 40.581: 8.6235% ( 31) 00:16:56.352 40.581 - 40.770: 8.6556% ( 30) 00:16:56.352 40.770 - 40.960: 8.6705% ( 14) 00:16:56.352 40.960 - 41.150: 8.6962% ( 24) 00:16:56.352 41.150 - 41.339: 8.7165% ( 19) 00:16:56.352 41.339 - 41.529: 8.7454% ( 27) 00:16:56.352 41.529 - 41.719: 8.7796% ( 32) 00:16:56.352 41.719 - 41.908: 8.8021% ( 21) 00:16:56.352 41.908 - 42.098: 8.8416% ( 37) 00:16:56.352 42.098 - 42.287: 8.9197% ( 73) 00:16:56.352 42.287 - 42.477: 8.9967% ( 72) 00:16:56.352 42.477 - 42.667: 9.0790% ( 77) 00:16:56.352 42.667 - 42.856: 9.1507% ( 67) 00:16:56.352 42.856 - 43.046: 9.2202% ( 65) 00:16:56.352 43.046 - 43.236: 9.2705% ( 47) 00:16:56.352 43.236 - 43.425: 9.3389% ( 64) 00:16:56.352 43.425 - 43.615: 9.4052% ( 62) 00:16:56.352 43.615 - 43.804: 9.4865% ( 76) 00:16:56.352 43.804 - 43.994: 9.5303% ( 41) 00:16:56.352 43.994 - 44.184: 9.5571% ( 25) 00:16:56.352 44.184 - 44.373: 9.5998% ( 40) 00:16:56.352 44.373 - 44.563: 9.6330% ( 31) 00:16:56.352 44.563 - 44.753: 9.6758% ( 40) 00:16:56.352 44.753 - 44.942: 9.7057% ( 28) 
00:16:56.352 44.942 - 45.132: 9.7356% ( 28) 00:16:56.352 45.132 - 45.321: 9.7602% ( 23) 00:16:56.352 45.321 - 45.511: 9.7880% ( 26) 00:16:56.352 45.511 - 45.701: 9.8233% ( 33) 00:16:56.352 45.701 - 45.890: 9.8522% ( 27) 00:16:56.352 45.890 - 46.080: 9.8982% ( 43) 00:16:56.352 46.080 - 46.270: 9.9367% ( 36) 00:16:56.352 46.270 - 46.459: 9.9656% ( 27) 00:16:56.352 46.459 - 46.649: 9.9934% ( 26) 00:16:56.352 46.649 - 46.839: 10.0115% ( 17) 00:16:56.352 46.839 - 47.028: 10.0351% ( 22) 00:16:56.352 47.028 - 47.218: 10.0714% ( 34) 00:16:56.352 47.218 - 47.407: 10.1014% ( 28) 00:16:56.352 47.407 - 47.597: 10.1238% ( 21) 00:16:56.352 47.597 - 47.787: 10.1506% ( 25) 00:16:56.352 47.787 - 47.976: 10.1891% ( 36) 00:16:56.352 47.976 - 48.166: 10.2340% ( 42) 00:16:56.352 48.166 - 48.356: 10.2725% ( 36) 00:16:56.352 48.356 - 48.545: 10.3110% ( 36) 00:16:56.352 48.545 - 48.924: 10.4136% ( 96) 00:16:56.352 48.924 - 49.304: 10.5708% ( 147) 00:16:56.352 49.304 - 49.683: 10.7291% ( 148) 00:16:56.352 49.683 - 50.062: 10.8425% ( 106) 00:16:56.352 50.062 - 50.441: 10.9334% ( 85) 00:16:56.352 50.441 - 50.821: 11.0574% ( 116) 00:16:56.352 50.821 - 51.200: 11.2007% ( 134) 00:16:56.352 51.200 - 51.579: 11.3151% ( 107) 00:16:56.352 51.579 - 51.959: 11.4017% ( 81) 00:16:56.352 51.959 - 52.338: 11.5012% ( 93) 00:16:56.352 52.338 - 52.717: 11.5921% ( 85) 00:16:56.352 52.717 - 53.096: 11.6873% ( 89) 00:16:56.352 53.096 - 53.476: 11.8295% ( 133) 00:16:56.352 53.476 - 53.855: 12.1717% ( 320) 00:16:56.352 53.855 - 54.234: 12.4711% ( 280) 00:16:56.352 54.234 - 54.613: 12.6283% ( 147) 00:16:56.352 54.613 - 54.993: 12.7567% ( 120) 00:16:56.352 54.993 - 55.372: 12.9096% ( 143) 00:16:56.352 55.372 - 55.751: 13.0090% ( 93) 00:16:56.352 55.751 - 56.130: 13.1256% ( 109) 00:16:56.352 56.130 - 56.510: 13.2133% ( 82) 00:16:56.352 56.510 - 56.889: 13.2849% ( 67) 00:16:56.352 56.889 - 57.268: 13.3555% ( 66) 00:16:56.352 57.268 - 57.647: 13.4186% ( 59) 00:16:56.352 57.647 - 58.027: 13.4539% ( 33) 00:16:56.352 58.027 - 58.406: 13.4913% ( 35) 00:16:56.352 58.406 - 58.785: 13.5448% ( 50) 00:16:56.352 58.785 - 59.164: 13.5940% ( 46) 00:16:56.353 59.164 - 59.544: 13.6549% ( 57) 00:16:56.353 59.544 - 59.923: 13.7148% ( 56) 00:16:56.353 59.923 - 60.302: 13.7704% ( 52) 00:16:56.353 60.302 - 60.681: 13.8699% ( 93) 00:16:56.353 60.681 - 61.061: 13.9661% ( 90) 00:16:56.353 61.061 - 61.440: 14.0164% ( 47) 00:16:56.353 61.440 - 61.819: 14.0624% ( 43) 00:16:56.353 61.819 - 62.199: 14.1073% ( 42) 00:16:56.353 62.199 - 62.578: 14.1682% ( 57) 00:16:56.353 62.578 - 62.957: 14.2303% ( 58) 00:16:56.353 62.957 - 63.336: 14.2912% ( 57) 00:16:56.353 63.336 - 63.716: 14.3650% ( 69) 00:16:56.353 63.716 - 64.095: 14.4377% ( 68) 00:16:56.353 64.095 - 64.474: 14.4890% ( 48) 00:16:56.353 64.474 - 64.853: 14.5714% ( 77) 00:16:56.353 64.853 - 65.233: 14.6773% ( 99) 00:16:56.353 65.233 - 65.612: 14.7649% ( 82) 00:16:56.353 65.612 - 65.991: 14.8334% ( 64) 00:16:56.353 65.991 - 66.370: 14.9264% ( 87) 00:16:56.353 66.370 - 66.750: 15.0205% ( 88) 00:16:56.353 66.750 - 67.129: 15.0922% ( 67) 00:16:56.353 67.129 - 67.508: 15.1606% ( 64) 00:16:56.353 67.508 - 67.887: 15.2344% ( 69) 00:16:56.353 67.887 - 68.267: 15.3093% ( 70) 00:16:56.353 68.267 - 68.646: 15.3638% ( 51) 00:16:56.353 68.646 - 69.025: 15.4269% ( 59) 00:16:56.353 69.025 - 69.404: 15.4761% ( 46) 00:16:56.353 69.404 - 69.784: 15.5392% ( 59) 00:16:56.353 69.784 - 70.163: 15.6344% ( 89) 00:16:56.353 70.163 - 70.542: 15.7638% ( 121) 00:16:56.353 70.542 - 70.921: 15.9038% ( 131) 00:16:56.353 70.921 - 71.301: 16.0375% 
( 125) 00:16:56.353 71.301 - 71.680: 16.1466% ( 102) 00:16:56.353 71.680 - 72.059: 16.2771% ( 122) 00:16:56.353 72.059 - 72.439: 16.3979% ( 113) 00:16:56.353 72.439 - 72.818: 16.4888% ( 85) 00:16:56.353 72.818 - 73.197: 16.5915% ( 96) 00:16:56.353 73.197 - 73.576: 16.7037% ( 105) 00:16:56.353 73.576 - 73.956: 16.7626% ( 55) 00:16:56.353 73.956 - 74.335: 16.8267% ( 60) 00:16:56.353 74.335 - 74.714: 16.8599% ( 31) 00:16:56.353 74.714 - 75.093: 16.9059% ( 43) 00:16:56.353 75.093 - 75.473: 16.9422% ( 34) 00:16:56.353 75.473 - 75.852: 16.9850% ( 40) 00:16:56.353 75.852 - 76.231: 17.0246% ( 37) 00:16:56.353 76.231 - 76.610: 17.0620% ( 35) 00:16:56.353 76.610 - 76.990: 17.1080% ( 43) 00:16:56.353 76.990 - 77.369: 17.1593% ( 48) 00:16:56.353 77.369 - 77.748: 17.2481% ( 83) 00:16:56.353 77.748 - 78.127: 17.3090% ( 57) 00:16:56.353 78.127 - 78.507: 17.3443% ( 33) 00:16:56.353 78.507 - 78.886: 17.3956% ( 48) 00:16:56.353 78.886 - 79.265: 17.4277% ( 30) 00:16:56.353 79.265 - 79.644: 17.4769% ( 46) 00:16:56.353 79.644 - 80.024: 17.5101% ( 31) 00:16:56.353 80.024 - 80.403: 17.5443% ( 32) 00:16:56.353 80.403 - 80.782: 17.5913% ( 44) 00:16:56.353 80.782 - 81.161: 17.6512% ( 56) 00:16:56.353 81.161 - 81.541: 17.7218% ( 66) 00:16:56.353 81.541 - 81.920: 17.8298% ( 101) 00:16:56.353 81.920 - 82.299: 17.9581% ( 120) 00:16:56.353 82.299 - 82.679: 18.0330% ( 70) 00:16:56.353 82.679 - 83.058: 18.1303% ( 91) 00:16:56.353 83.058 - 83.437: 18.2276% ( 91) 00:16:56.353 83.437 - 83.816: 18.3303% ( 96) 00:16:56.353 83.816 - 84.196: 18.4041% ( 69) 00:16:56.353 84.196 - 84.575: 18.4575% ( 50) 00:16:56.353 84.575 - 84.954: 18.5249% ( 63) 00:16:56.353 84.954 - 85.333: 18.5623% ( 35) 00:16:56.353 85.333 - 85.713: 18.5805% ( 17) 00:16:56.353 85.713 - 86.092: 18.6051% ( 23) 00:16:56.353 86.092 - 86.471: 18.6404% ( 33) 00:16:56.353 86.471 - 86.850: 18.6618% ( 20) 00:16:56.353 86.850 - 87.230: 18.6971% ( 33) 00:16:56.353 87.230 - 87.609: 18.7409% ( 41) 00:16:56.353 87.609 - 87.988: 18.7944% ( 50) 00:16:56.353 87.988 - 88.367: 18.8382% ( 41) 00:16:56.353 88.367 - 88.747: 18.8735% ( 33) 00:16:56.353 88.747 - 89.126: 18.9099% ( 34) 00:16:56.353 89.126 - 89.505: 18.9526% ( 40) 00:16:56.353 89.505 - 89.884: 18.9911% ( 36) 00:16:56.353 89.884 - 90.264: 19.0211% ( 28) 00:16:56.353 90.264 - 90.643: 19.0489% ( 26) 00:16:56.353 90.643 - 91.022: 19.0863% ( 35) 00:16:56.353 91.022 - 91.401: 19.1270% ( 38) 00:16:56.353 91.401 - 91.781: 19.1665% ( 37) 00:16:56.353 91.781 - 92.160: 19.1879% ( 20) 00:16:56.353 92.160 - 92.539: 19.2136% ( 24) 00:16:56.353 92.539 - 92.919: 19.2457% ( 30) 00:16:56.353 92.919 - 93.298: 19.2745% ( 27) 00:16:56.353 93.298 - 93.677: 19.2991% ( 23) 00:16:56.353 93.677 - 94.056: 19.3109% ( 11) 00:16:56.353 94.056 - 94.436: 19.3323% ( 20) 00:16:56.353 94.436 - 94.815: 19.3579% ( 24) 00:16:56.353 94.815 - 95.194: 19.3825% ( 23) 00:16:56.353 95.194 - 95.573: 19.4050% ( 21) 00:16:56.353 95.573 - 95.953: 19.4307% ( 24) 00:16:56.353 95.953 - 96.332: 19.4542% ( 22) 00:16:56.353 96.332 - 96.711: 19.4745% ( 19) 00:16:56.353 96.711 - 97.090: 19.5044% ( 28) 00:16:56.353 97.090 - 97.849: 19.5643% ( 56) 00:16:56.353 97.849 - 98.607: 19.6178% ( 50) 00:16:56.353 98.607 - 99.366: 19.6980% ( 75) 00:16:56.353 99.366 - 100.124: 19.7857% ( 82) 00:16:56.353 100.124 - 100.883: 19.8894% ( 97) 00:16:56.353 100.883 - 101.641: 19.9654% ( 71) 00:16:56.353 101.641 - 102.400: 20.0456% ( 75) 00:16:56.353 102.400 - 103.159: 20.1033% ( 54) 00:16:56.353 103.159 - 103.917: 20.1557% ( 49) 00:16:56.353 103.917 - 104.676: 20.2038% ( 45) 00:16:56.353 
104.676 - 105.434: 20.2552% ( 48) 00:16:56.353 105.434 - 106.193: 20.3097% ( 51) 00:16:56.353 106.193 - 106.951: 20.3664% ( 53) 00:16:56.353 106.951 - 107.710: 20.4359% ( 65) 00:16:56.353 107.710 - 108.468: 20.4861% ( 47) 00:16:56.353 108.468 - 109.227: 20.5450% ( 55) 00:16:56.353 109.227 - 109.985: 20.6230% ( 73) 00:16:56.353 109.985 - 110.744: 20.6626% ( 37) 00:16:56.353 110.744 - 111.502: 20.7011% ( 36) 00:16:56.353 111.502 - 112.261: 20.7524% ( 48) 00:16:56.353 112.261 - 113.019: 20.7952% ( 40) 00:16:56.353 113.019 - 113.778: 20.8209% ( 24) 00:16:56.353 113.778 - 114.536: 20.8818% ( 57) 00:16:56.353 114.536 - 115.295: 20.9438% ( 58) 00:16:56.353 115.295 - 116.053: 21.0037% ( 56) 00:16:56.353 116.053 - 116.812: 21.0604% ( 53) 00:16:56.353 116.812 - 117.570: 21.1181% ( 54) 00:16:56.353 117.570 - 118.329: 21.1641% ( 43) 00:16:56.353 118.329 - 119.087: 21.2112% ( 44) 00:16:56.353 119.087 - 119.846: 21.2550% ( 41) 00:16:56.353 119.846 - 120.604: 21.2925% ( 35) 00:16:56.353 120.604 - 121.363: 21.3374% ( 42) 00:16:56.353 121.363 - 122.121: 21.4079% ( 66) 00:16:56.353 122.121 - 122.880: 21.4464% ( 36) 00:16:56.353 122.880 - 123.639: 21.4828% ( 34) 00:16:56.353 123.639 - 124.397: 21.5213% ( 36) 00:16:56.353 124.397 - 125.156: 21.5491% ( 26) 00:16:56.353 125.156 - 125.914: 21.5758% ( 25) 00:16:56.353 125.914 - 126.673: 21.6143% ( 36) 00:16:56.353 126.673 - 127.431: 21.6389% ( 23) 00:16:56.353 127.431 - 128.190: 21.6785% ( 37) 00:16:56.353 128.190 - 128.948: 21.7106% ( 30) 00:16:56.353 128.948 - 129.707: 21.7373% ( 25) 00:16:56.353 129.707 - 130.465: 21.7651% ( 26) 00:16:56.353 130.465 - 131.224: 21.7983% ( 31) 00:16:56.353 131.224 - 131.982: 21.8304% ( 30) 00:16:56.353 131.982 - 132.741: 21.8774% ( 44) 00:16:56.353 132.741 - 133.499: 21.9394% ( 58) 00:16:56.353 133.499 - 134.258: 22.0089% ( 65) 00:16:56.353 134.258 - 135.016: 22.0571% ( 45) 00:16:56.353 135.016 - 135.775: 22.0977% ( 38) 00:16:56.353 135.775 - 136.533: 22.1319% ( 32) 00:16:56.353 136.533 - 137.292: 22.1640% ( 30) 00:16:56.353 137.292 - 138.050: 22.2068% ( 40) 00:16:56.353 138.050 - 138.809: 22.2389% ( 30) 00:16:56.353 138.809 - 139.567: 22.3041% ( 61) 00:16:56.353 139.567 - 140.326: 22.3511% ( 44) 00:16:56.353 140.326 - 141.084: 22.3704% ( 18) 00:16:56.353 141.084 - 141.843: 22.3982% ( 26) 00:16:56.353 141.843 - 142.601: 22.4228% ( 23) 00:16:56.353 142.601 - 143.360: 22.4420% ( 18) 00:16:56.353 143.360 - 144.119: 22.4624% ( 19) 00:16:56.353 144.119 - 144.877: 22.4902% ( 26) 00:16:56.353 144.877 - 145.636: 22.5137% ( 22) 00:16:56.353 145.636 - 146.394: 22.5543% ( 38) 00:16:56.353 146.394 - 147.153: 22.5800% ( 24) 00:16:56.353 147.153 - 147.911: 22.6153% ( 33) 00:16:56.353 147.911 - 148.670: 22.6495% ( 32) 00:16:56.353 148.670 - 149.428: 22.6891% ( 37) 00:16:56.353 149.428 - 150.187: 22.7318% ( 40) 00:16:56.353 150.187 - 150.945: 22.7703% ( 36) 00:16:56.353 150.945 - 151.704: 22.7960% ( 24) 00:16:56.353 151.704 - 152.462: 22.8142% ( 17) 00:16:56.353 152.462 - 153.221: 22.8345% ( 19) 00:16:56.353 153.221 - 153.979: 22.8698% ( 33) 00:16:56.353 153.979 - 154.738: 22.9062% ( 34) 00:16:56.353 154.738 - 155.496: 22.9329% ( 25) 00:16:56.353 155.496 - 156.255: 22.9810% ( 45) 00:16:56.353 156.255 - 157.013: 23.0195% ( 36) 00:16:56.353 157.013 - 157.772: 23.0516% ( 30) 00:16:56.353 157.772 - 158.530: 23.0655% ( 13) 00:16:56.353 158.530 - 159.289: 23.0890% ( 22) 00:16:56.353 159.289 - 160.047: 23.1147% ( 24) 00:16:56.353 160.047 - 160.806: 23.1414% ( 25) 00:16:56.353 160.806 - 161.564: 23.1607% ( 18) 00:16:56.353 161.564 - 162.323: 23.1692% ( 
8) 00:16:56.353 162.323 - 163.081: 23.1821% ( 12) 00:16:56.353 163.081 - 163.840: 23.2034% ( 20) 00:16:56.353 163.840 - 164.599: 23.2248% ( 20) 00:16:56.353 164.599 - 165.357: 23.2612% ( 34) 00:16:56.353 165.357 - 166.116: 23.2836% ( 21) 00:16:56.353 166.116 - 166.874: 23.2922% ( 8) 00:16:56.353 166.874 - 167.633: 23.3221% ( 28) 00:16:56.353 167.633 - 168.391: 23.3382% ( 15) 00:16:56.353 168.391 - 169.150: 23.3585% ( 19) 00:16:56.354 169.150 - 169.908: 23.3756% ( 16) 00:16:56.354 169.908 - 170.667: 23.4002% ( 23) 00:16:56.354 170.667 - 171.425: 23.4237% ( 22) 00:16:56.354 171.425 - 172.184: 23.4430% ( 18) 00:16:56.354 172.184 - 172.942: 23.4590% ( 15) 00:16:56.354 172.942 - 173.701: 23.4783% ( 18) 00:16:56.354 173.701 - 174.459: 23.4964% ( 17) 00:16:56.354 174.459 - 175.218: 23.5136% ( 16) 00:16:56.354 175.218 - 175.976: 23.5210% ( 7) 00:16:56.354 175.976 - 176.735: 23.5317% ( 10) 00:16:56.354 176.735 - 177.493: 23.5478% ( 15) 00:16:56.354 177.493 - 178.252: 23.5553% ( 7) 00:16:56.354 178.252 - 179.010: 23.5745% ( 18) 00:16:56.354 179.010 - 179.769: 23.5991% ( 23) 00:16:56.354 179.769 - 180.527: 23.6055% ( 6) 00:16:56.354 180.527 - 181.286: 23.6248% ( 18) 00:16:56.354 181.286 - 182.044: 23.6397% ( 14) 00:16:56.354 182.044 - 182.803: 23.6750% ( 33) 00:16:56.354 182.803 - 183.561: 23.6921% ( 16) 00:16:56.354 183.561 - 184.320: 23.7060% ( 13) 00:16:56.354 184.320 - 185.079: 23.7221% ( 15) 00:16:56.354 185.079 - 185.837: 23.7424% ( 19) 00:16:56.354 185.837 - 186.596: 23.7510% ( 8) 00:16:56.354 186.596 - 187.354: 23.7691% ( 17) 00:16:56.354 187.354 - 188.113: 23.7873% ( 17) 00:16:56.354 188.113 - 188.871: 23.8141% ( 25) 00:16:56.354 188.871 - 189.630: 23.8333% ( 18) 00:16:56.354 189.630 - 190.388: 23.8397% ( 6) 00:16:56.354 190.388 - 191.147: 23.8483% ( 8) 00:16:56.354 191.147 - 191.905: 23.8590% ( 10) 00:16:56.354 191.905 - 192.664: 23.8739% ( 14) 00:16:56.354 192.664 - 193.422: 23.8900% ( 15) 00:16:56.354 193.422 - 194.181: 23.9017% ( 11) 00:16:56.354 194.181 - 195.698: 23.9221% ( 19) 00:16:56.354 195.698 - 197.215: 23.9285% ( 6) 00:16:56.354 197.215 - 198.732: 23.9509% ( 21) 00:16:56.354 198.732 - 200.249: 23.9852% ( 32) 00:16:56.354 200.249 - 201.766: 24.0333% ( 45) 00:16:56.354 201.766 - 203.283: 24.0664% ( 31) 00:16:56.354 203.283 - 204.800: 24.1017% ( 33) 00:16:56.354 204.800 - 206.317: 24.1231% ( 20) 00:16:56.354 206.317 - 207.834: 24.1402% ( 16) 00:16:56.354 207.834 - 209.351: 24.1584% ( 17) 00:16:56.354 209.351 - 210.868: 24.1830% ( 23) 00:16:56.354 210.868 - 212.385: 24.2087% ( 24) 00:16:56.354 212.385 - 213.902: 24.2194% ( 10) 00:16:56.354 213.902 - 215.419: 24.2354% ( 15) 00:16:56.354 215.419 - 216.936: 24.2632% ( 26) 00:16:56.354 216.936 - 218.453: 24.2931% ( 28) 00:16:56.354 218.453 - 219.970: 24.3124% ( 18) 00:16:56.354 219.970 - 221.487: 24.3306% ( 17) 00:16:56.354 221.487 - 223.004: 24.3541% ( 22) 00:16:56.354 223.004 - 224.521: 24.3712% ( 16) 00:16:56.354 224.521 - 226.039: 24.3883% ( 16) 00:16:56.354 226.039 - 227.556: 24.4086% ( 19) 00:16:56.354 227.556 - 229.073: 24.4225% ( 13) 00:16:56.354 229.073 - 230.590: 24.4343% ( 11) 00:16:56.354 230.590 - 232.107: 24.4503% ( 15) 00:16:56.354 232.107 - 233.624: 24.4610% ( 10) 00:16:56.354 233.624 - 235.141: 24.4696% ( 8) 00:16:56.354 235.141 - 236.658: 24.4792% ( 9) 00:16:56.354 236.658 - 238.175: 24.4931% ( 13) 00:16:56.354 238.175 - 239.692: 24.5124% ( 18) 00:16:56.354 239.692 - 241.209: 24.5284% ( 15) 00:16:56.354 241.209 - 242.726: 24.5530% ( 23) 00:16:56.354 242.726 - 244.243: 24.5658% ( 12) 00:16:56.354 244.243 - 245.760: 
24.5787% ( 12) 00:16:56.354 245.760 - 247.277: 24.5851% ( 6) 00:16:56.354 247.277 - 248.794: 24.5979% ( 12) 00:16:56.354 248.794 - 250.311: 24.6193% ( 20) 00:16:56.354 250.311 - 251.828: 24.6364% ( 16) 00:16:56.354 251.828 - 253.345: 24.6525% ( 15) 00:16:56.354 253.345 - 254.862: 24.6674% ( 14) 00:16:56.354 254.862 - 256.379: 24.6835% ( 15) 00:16:56.354 256.379 - 257.896: 24.7006% ( 16) 00:16:56.354 257.896 - 259.413: 24.7059% ( 5) 00:16:56.354 259.413 - 260.930: 24.7166% ( 10) 00:16:56.354 260.930 - 262.447: 24.7273% ( 10) 00:16:56.354 262.447 - 263.964: 24.7359% ( 8) 00:16:56.354 263.964 - 265.481: 24.7444% ( 8) 00:16:56.354 265.481 - 266.999: 24.7722% ( 26) 00:16:56.354 266.999 - 268.516: 24.7851% ( 12) 00:16:56.354 268.516 - 270.033: 24.7957% ( 10) 00:16:56.354 270.033 - 271.550: 24.8075% ( 11) 00:16:56.354 271.550 - 273.067: 24.8289% ( 20) 00:16:56.354 273.067 - 274.584: 24.8364% ( 7) 00:16:56.354 274.584 - 276.101: 24.8481% ( 11) 00:16:56.354 276.101 - 277.618: 24.8663% ( 17) 00:16:56.354 277.618 - 279.135: 24.8813% ( 14) 00:16:56.354 279.135 - 280.652: 24.8899% ( 8) 00:16:56.354 280.652 - 282.169: 24.9080% ( 17) 00:16:56.354 282.169 - 283.686: 24.9251% ( 16) 00:16:56.354 283.686 - 285.203: 24.9487% ( 22) 00:16:56.354 285.203 - 286.720: 24.9529% ( 4) 00:16:56.354 286.720 - 288.237: 24.9647% ( 11) 00:16:56.354 288.237 - 289.754: 24.9754% ( 10) 00:16:56.354 289.754 - 291.271: 24.9979% ( 21) 00:16:56.354 291.271 - 292.788: 25.0075% ( 9) 00:16:56.354 292.788 - 294.305: 25.0171% ( 9) 00:16:56.354 294.305 - 295.822: 25.0267% ( 9) 00:16:56.354 295.822 - 297.339: 25.0374% ( 10) 00:16:56.354 297.339 - 298.856: 25.0406% ( 3) 00:16:56.354 298.856 - 300.373: 25.0524% ( 11) 00:16:56.354 300.373 - 301.890: 25.0577% ( 5) 00:16:56.354 301.890 - 303.407: 25.0684% ( 10) 00:16:56.354 303.407 - 304.924: 25.0834% ( 14) 00:16:56.354 304.924 - 306.441: 25.0941% ( 10) 00:16:56.354 306.441 - 307.959: 25.1059% ( 11) 00:16:56.354 307.959 - 309.476: 25.1198% ( 13) 00:16:56.354 309.476 - 310.993: 25.1273% ( 7) 00:16:56.354 310.993 - 312.510: 25.1390% ( 11) 00:16:56.354 312.510 - 314.027: 25.1529% ( 13) 00:16:56.354 314.027 - 315.544: 25.1625% ( 9) 00:16:56.354 315.544 - 317.061: 25.1711% ( 8) 00:16:56.354 317.061 - 318.578: 25.1743% ( 3) 00:16:56.354 318.578 - 320.095: 25.1850% ( 10) 00:16:56.354 320.095 - 321.612: 25.1978% ( 12) 00:16:56.354 321.612 - 323.129: 25.2096% ( 11) 00:16:56.354 323.129 - 324.646: 25.2288% ( 18) 00:16:56.354 324.646 - 326.163: 25.2374% ( 8) 00:16:56.354 326.163 - 327.680: 25.2567% ( 18) 00:16:56.354 327.680 - 329.197: 25.2706% ( 13) 00:16:56.354 329.197 - 330.714: 25.2845% ( 13) 00:16:56.354 330.714 - 332.231: 25.2962% ( 11) 00:16:56.354 332.231 - 333.748: 25.3176% ( 20) 00:16:56.354 333.748 - 335.265: 25.3369% ( 18) 00:16:56.354 335.265 - 336.782: 25.3454% ( 8) 00:16:56.354 336.782 - 338.299: 25.3550% ( 9) 00:16:56.354 338.299 - 339.816: 25.3689% ( 13) 00:16:56.354 339.816 - 341.333: 25.3818% ( 12) 00:16:56.354 341.333 - 342.850: 25.3957% ( 13) 00:16:56.354 342.850 - 344.367: 25.4074% ( 11) 00:16:56.354 344.367 - 345.884: 25.4278% ( 19) 00:16:56.354 345.884 - 347.401: 25.4384% ( 10) 00:16:56.354 347.401 - 348.919: 25.4556% ( 16) 00:16:56.354 348.919 - 350.436: 25.4737% ( 17) 00:16:56.354 350.436 - 351.953: 25.4983% ( 23) 00:16:56.354 351.953 - 353.470: 25.5358% ( 35) 00:16:56.354 353.470 - 354.987: 25.5432% ( 7) 00:16:56.354 354.987 - 356.504: 25.5636% ( 19) 00:16:56.354 356.504 - 358.021: 25.5721% ( 8) 00:16:56.354 358.021 - 359.538: 25.5785% ( 6) 00:16:56.354 359.538 - 361.055: 
25.5956% ( 16) 00:16:56.354 361.055 - 362.572: 25.6074% ( 11) 00:16:56.354 362.572 - 364.089: 25.6277% ( 19) 00:16:56.354 364.089 - 365.606: 25.6320% ( 4) 00:16:56.354 365.606 - 367.123: 25.6491% ( 16) 00:16:56.354 367.123 - 368.640: 25.6737% ( 23) 00:16:56.354 368.640 - 370.157: 25.6823% ( 8) 00:16:56.354 370.157 - 371.674: 25.6930% ( 10) 00:16:56.354 371.674 - 373.191: 25.7069% ( 13) 00:16:56.354 373.191 - 374.708: 25.7208% ( 13) 00:16:56.354 374.708 - 376.225: 25.7282% ( 7) 00:16:56.354 376.225 - 377.742: 25.7400% ( 11) 00:16:56.354 377.742 - 379.259: 25.7518% ( 11) 00:16:56.354 379.259 - 380.776: 25.7689% ( 16) 00:16:56.355 380.776 - 382.293: 25.7913% ( 21) 00:16:56.355 382.293 - 383.810: 25.8020% ( 10) 00:16:56.355 383.810 - 385.327: 25.8309% ( 27) 00:16:56.355 385.327 - 386.844: 25.8373% ( 6) 00:16:56.355 386.844 - 388.361: 25.8459% ( 8) 00:16:56.355 388.361 - 391.396: 25.8694% ( 22) 00:16:56.355 391.396 - 394.430: 25.8929% ( 22) 00:16:56.355 394.430 - 397.464: 25.9239% ( 29) 00:16:56.355 397.464 - 400.498: 25.9453% ( 20) 00:16:56.355 400.498 - 403.532: 25.9828% ( 35) 00:16:56.355 403.532 - 406.566: 26.0031% ( 19) 00:16:56.355 406.566 - 409.600: 26.0213% ( 17) 00:16:56.355 409.600 - 412.634: 26.0405% ( 18) 00:16:56.355 412.634 - 415.668: 26.0747% ( 32) 00:16:56.355 415.668 - 418.702: 26.0940% ( 18) 00:16:56.355 418.702 - 421.736: 26.1154% ( 20) 00:16:56.355 421.736 - 424.770: 26.1368% ( 20) 00:16:56.355 424.770 - 427.804: 26.1795% ( 40) 00:16:56.355 427.804 - 430.839: 26.1966% ( 16) 00:16:56.355 430.839 - 433.873: 26.2276% ( 29) 00:16:56.355 433.873 - 436.907: 26.2480% ( 19) 00:16:56.355 436.907 - 439.941: 26.2779% ( 28) 00:16:56.355 439.941 - 442.975: 26.3121% ( 32) 00:16:56.355 442.975 - 446.009: 26.3367% ( 23) 00:16:56.355 446.009 - 449.043: 26.3635% ( 25) 00:16:56.355 449.043 - 452.077: 26.3827% ( 18) 00:16:56.355 452.077 - 455.111: 26.4159% ( 31) 00:16:56.355 455.111 - 458.145: 26.4479% ( 30) 00:16:56.355 458.145 - 461.179: 26.4822% ( 32) 00:16:56.355 461.179 - 464.213: 26.5196% ( 35) 00:16:56.355 464.213 - 467.247: 26.5474% ( 26) 00:16:56.355 467.247 - 470.281: 26.5720% ( 23) 00:16:56.355 470.281 - 473.316: 26.5966% ( 23) 00:16:56.355 473.316 - 476.350: 26.6308% ( 32) 00:16:56.355 476.350 - 479.384: 26.6490% ( 17) 00:16:56.355 479.384 - 482.418: 26.6853% ( 34) 00:16:56.355 482.418 - 485.452: 26.7099% ( 23) 00:16:56.355 485.452 - 488.486: 26.7410% ( 29) 00:16:56.355 488.486 - 491.520: 26.7516% ( 10) 00:16:56.355 491.520 - 494.554: 26.7848% ( 31) 00:16:56.355 494.554 - 497.588: 26.8137% ( 27) 00:16:56.355 497.588 - 500.622: 26.8415% ( 26) 00:16:56.355 500.622 - 503.656: 26.8618% ( 19) 00:16:56.355 503.656 - 506.690: 26.8992% ( 35) 00:16:56.355 506.690 - 509.724: 26.9270% ( 26) 00:16:56.355 509.724 - 512.759: 26.9548% ( 26) 00:16:56.355 512.759 - 515.793: 26.9869% ( 30) 00:16:56.355 515.793 - 518.827: 27.0254% ( 36) 00:16:56.355 518.827 - 521.861: 27.0414% ( 15) 00:16:56.355 521.861 - 524.895: 27.0714% ( 28) 00:16:56.355 524.895 - 527.929: 27.1024% ( 29) 00:16:56.355 527.929 - 530.963: 27.1323% ( 28) 00:16:56.355 530.963 - 533.997: 27.1602% ( 26) 00:16:56.355 533.997 - 537.031: 27.1847% ( 23) 00:16:56.355 537.031 - 540.065: 27.2307% ( 43) 00:16:56.355 540.065 - 543.099: 27.2585% ( 26) 00:16:56.355 543.099 - 546.133: 27.2970% ( 36) 00:16:56.355 546.133 - 549.167: 27.3377% ( 38) 00:16:56.355 549.167 - 552.201: 27.3687% ( 29) 00:16:56.355 552.201 - 555.236: 27.4179% ( 46) 00:16:56.355 555.236 - 558.270: 27.4446% ( 25) 00:16:56.355 558.270 - 561.304: 27.4788% ( 32) 00:16:56.355 
561.304 - 564.338: 27.4938% ( 14) 00:16:56.355 564.338 - 567.372: 27.5184% ( 23) 00:16:56.355 567.372 - 570.406: 27.5601% ( 39) 00:16:56.355 570.406 - 573.440: 27.5943% ( 32) 00:16:56.355 573.440 - 576.474: 27.6371% ( 40) 00:16:56.355 576.474 - 579.508: 27.6713% ( 32) 00:16:56.355 579.508 - 582.542: 27.6927% ( 20) 00:16:56.355 582.542 - 585.576: 27.7205% ( 26) 00:16:56.355 585.576 - 588.610: 27.7569% ( 34) 00:16:56.355 588.610 - 591.644: 27.7729% ( 15) 00:16:56.355 591.644 - 594.679: 27.8050% ( 30) 00:16:56.355 594.679 - 597.713: 27.8360% ( 29) 00:16:56.355 597.713 - 600.747: 27.8745% ( 36) 00:16:56.355 600.747 - 603.781: 27.8970% ( 21) 00:16:56.355 603.781 - 606.815: 27.9301% ( 31) 00:16:56.355 606.815 - 609.849: 27.9568% ( 25) 00:16:56.355 609.849 - 612.883: 27.9975% ( 38) 00:16:56.355 612.883 - 615.917: 28.0392% ( 39) 00:16:56.355 615.917 - 618.951: 28.0873% ( 45) 00:16:56.355 618.951 - 621.985: 28.1376% ( 47) 00:16:56.355 621.985 - 625.019: 28.1675% ( 28) 00:16:56.355 625.019 - 628.053: 28.2017% ( 32) 00:16:56.355 628.053 - 631.087: 28.2627% ( 57) 00:16:56.355 631.087 - 634.121: 28.2883% ( 24) 00:16:56.355 634.121 - 637.156: 28.3322% ( 41) 00:16:56.355 637.156 - 640.190: 28.3835% ( 48) 00:16:56.355 640.190 - 643.224: 28.4177% ( 32) 00:16:56.355 643.224 - 646.258: 28.4423% ( 23) 00:16:56.355 646.258 - 649.292: 28.4744% ( 30) 00:16:56.355 649.292 - 652.326: 28.5022% ( 26) 00:16:56.355 652.326 - 655.360: 28.5429% ( 38) 00:16:56.355 655.360 - 658.394: 28.5953% ( 49) 00:16:56.355 658.394 - 661.428: 28.6733% ( 73) 00:16:56.355 661.428 - 664.462: 28.7268% ( 50) 00:16:56.355 664.462 - 667.496: 28.7525% ( 24) 00:16:56.355 667.496 - 670.530: 28.7952% ( 40) 00:16:56.355 670.530 - 673.564: 28.8380% ( 40) 00:16:56.355 673.564 - 676.599: 28.9000% ( 58) 00:16:56.355 676.599 - 679.633: 28.9471% ( 44) 00:16:56.355 679.633 - 682.667: 28.9877% ( 38) 00:16:56.355 682.667 - 685.701: 29.0294% ( 39) 00:16:56.355 685.701 - 688.735: 29.0701% ( 38) 00:16:56.355 688.735 - 691.769: 29.1193% ( 46) 00:16:56.355 691.769 - 694.803: 29.1898% ( 66) 00:16:56.355 694.803 - 697.837: 29.2412% ( 48) 00:16:56.355 697.837 - 700.871: 29.2850% ( 41) 00:16:56.355 700.871 - 703.905: 29.3321% ( 44) 00:16:56.355 703.905 - 706.939: 29.3748% ( 40) 00:16:56.355 706.939 - 709.973: 29.4165% ( 39) 00:16:56.355 709.973 - 713.007: 29.4550% ( 36) 00:16:56.355 713.007 - 716.041: 29.4893% ( 32) 00:16:56.355 716.041 - 719.076: 29.5246% ( 33) 00:16:56.355 719.076 - 722.110: 29.5802% ( 52) 00:16:56.355 722.110 - 725.144: 29.6155% ( 33) 00:16:56.355 725.144 - 728.178: 29.6700% ( 51) 00:16:56.355 728.178 - 731.212: 29.7224% ( 49) 00:16:56.355 731.212 - 734.246: 29.7759% ( 50) 00:16:56.355 734.246 - 737.280: 29.8464% ( 66) 00:16:56.355 737.280 - 740.314: 29.9138% ( 63) 00:16:56.355 740.314 - 743.348: 30.0111% ( 91) 00:16:56.355 743.348 - 746.382: 30.0571% ( 43) 00:16:56.355 746.382 - 749.416: 30.1181% ( 57) 00:16:56.355 749.416 - 752.450: 30.1673% ( 46) 00:16:56.355 752.450 - 755.484: 30.2614% ( 88) 00:16:56.355 755.484 - 758.519: 30.3116% ( 47) 00:16:56.355 758.519 - 761.553: 30.3982% ( 81) 00:16:56.355 761.553 - 764.587: 30.4613% ( 59) 00:16:56.355 764.587 - 767.621: 30.5458% ( 79) 00:16:56.355 767.621 - 770.655: 30.6110% ( 61) 00:16:56.355 770.655 - 773.689: 30.7126% ( 95) 00:16:56.355 773.689 - 776.723: 30.8046% ( 86) 00:16:56.355 776.723 - 782.791: 30.9853% ( 169) 00:16:56.355 782.791 - 788.859: 31.1308% ( 136) 00:16:56.355 788.859 - 794.927: 31.3168% ( 174) 00:16:56.355 794.927 - 800.996: 31.5104% ( 181) 00:16:56.355 800.996 - 807.064: 
31.6965% ( 174) 00:16:56.355 807.064 - 813.132: 31.9414% ( 229) 00:16:56.355 813.132 - 819.200: 32.1670% ( 211) 00:16:56.355 819.200 - 825.268: 32.4097% ( 227) 00:16:56.355 825.268 - 831.336: 32.6193% ( 196) 00:16:56.355 831.336 - 837.404: 32.8642% ( 229) 00:16:56.355 837.404 - 843.473: 33.1369% ( 255) 00:16:56.355 843.473 - 849.541: 33.3904% ( 237) 00:16:56.355 849.541 - 855.609: 33.6673% ( 259) 00:16:56.355 855.609 - 861.677: 33.9251% ( 241) 00:16:56.355 861.677 - 867.745: 34.2084% ( 265) 00:16:56.355 867.745 - 873.813: 34.4416% ( 218) 00:16:56.355 873.813 - 879.881: 34.6800% ( 223) 00:16:56.355 879.881 - 885.950: 34.9474% ( 250) 00:16:56.355 885.950 - 892.018: 35.1645% ( 203) 00:16:56.355 892.018 - 898.086: 35.4457% ( 263) 00:16:56.355 898.086 - 904.154: 35.6981% ( 236) 00:16:56.355 904.154 - 910.222: 35.9943% ( 277) 00:16:56.355 910.222 - 916.290: 36.2028% ( 195) 00:16:56.355 916.290 - 922.359: 36.4456% ( 227) 00:16:56.355 922.359 - 928.427: 36.7161% ( 253) 00:16:56.355 928.427 - 934.495: 36.9739% ( 241) 00:16:56.355 934.495 - 940.563: 37.2081% ( 219) 00:16:56.355 940.563 - 946.631: 37.4273% ( 205) 00:16:56.355 946.631 - 952.699: 37.6893% ( 245) 00:16:56.355 952.699 - 958.767: 37.9320% ( 227) 00:16:56.355 958.767 - 964.836: 38.1919% ( 243) 00:16:56.355 964.836 - 970.904: 38.4550% ( 246) 00:16:56.355 970.904 - 976.972: 38.7266% ( 254) 00:16:56.355 976.972 - 983.040: 38.9811% ( 238) 00:16:56.355 983.040 - 989.108: 39.1982% ( 203) 00:16:56.355 989.108 - 995.176: 39.4751% ( 259) 00:16:56.355 995.176 - 1001.244: 39.7842% ( 289) 00:16:56.355 1001.244 - 1007.313: 40.0419% ( 241) 00:16:56.355 1007.313 - 1013.381: 40.3114% ( 252) 00:16:56.355 1013.381 - 1019.449: 40.6119% ( 281) 00:16:56.355 1019.449 - 1025.517: 40.8846% ( 255) 00:16:56.355 1025.517 - 1031.585: 41.1412% ( 240) 00:16:56.355 1031.585 - 1037.653: 41.4150% ( 256) 00:16:56.355 1037.653 - 1043.721: 41.6727% ( 241) 00:16:56.355 1043.721 - 1049.790: 41.9059% ( 218) 00:16:56.355 1049.790 - 1055.858: 42.1507% ( 229) 00:16:56.355 1055.858 - 1061.926: 42.4512% ( 281) 00:16:56.355 1061.926 - 1067.994: 42.7924% ( 319) 00:16:56.355 1067.994 - 1074.062: 43.0394% ( 231) 00:16:56.355 1074.062 - 1080.130: 43.3410% ( 282) 00:16:56.355 1080.130 - 1086.199: 43.6158% ( 257) 00:16:56.355 1086.199 - 1092.267: 43.8746% ( 242) 00:16:56.355 1092.267 - 1098.335: 44.0810% ( 193) 00:16:56.355 1098.335 - 1104.403: 44.3152% ( 219) 00:16:56.355 1104.403 - 1110.471: 44.6018% ( 268) 00:16:56.355 1110.471 - 1116.539: 44.8798% ( 260) 00:16:56.355 1116.539 - 1122.607: 45.1771% ( 278) 00:16:56.356 1122.607 - 1128.676: 45.4081% ( 216) 00:16:56.356 1128.676 - 1134.744: 45.7011% ( 274) 00:16:56.356 1134.744 - 1140.812: 45.9396% ( 223) 00:16:56.356 1140.812 - 1146.880: 46.2187% ( 261) 00:16:56.356 1146.880 - 1152.948: 46.5192% ( 281) 00:16:56.356 1152.948 - 1159.016: 46.7726% ( 237) 00:16:56.356 1159.016 - 1165.084: 47.0271% ( 238) 00:16:56.356 1165.084 - 1171.153: 47.2752% ( 232) 00:16:56.356 1171.153 - 1177.221: 47.5554% ( 262) 00:16:56.356 1177.221 - 1183.289: 47.8516% ( 277) 00:16:56.356 1183.289 - 1189.357: 48.0879% ( 221) 00:16:56.356 1189.357 - 1195.425: 48.3425% ( 238) 00:16:56.356 1195.425 - 1201.493: 48.5531% ( 197) 00:16:56.356 1201.493 - 1207.561: 48.8215% ( 251) 00:16:56.356 1207.561 - 1213.630: 49.0408% ( 205) 00:16:56.356 1213.630 - 1219.698: 49.2728% ( 217) 00:16:56.356 1219.698 - 1225.766: 49.5263% ( 237) 00:16:56.356 1225.766 - 1231.834: 49.7540% ( 213) 00:16:56.356 1231.834 - 1237.902: 49.9840% ( 215) 00:16:56.356 1237.902 - 1243.970: 50.2417% ( 241) 
00:16:56.356 1243.970 - 1250.039: 50.4438% ( 189) 00:16:56.356 1250.039 - 1256.107: 50.6619% ( 204) 00:16:56.356 1256.107 - 1262.175: 50.8373% ( 164) 00:16:56.356 1262.175 - 1268.243: 51.0373% ( 187) 00:16:56.356 1268.243 - 1274.311: 51.2373% ( 187) 00:16:56.356 1274.311 - 1280.379: 51.4811% ( 228) 00:16:56.356 1280.379 - 1286.447: 51.7527% ( 254) 00:16:56.356 1286.447 - 1292.516: 51.9826% ( 215) 00:16:56.356 1292.516 - 1298.584: 52.2489% ( 249) 00:16:56.356 1298.584 - 1304.652: 52.4852% ( 221) 00:16:56.356 1304.652 - 1310.720: 52.6906% ( 192) 00:16:56.356 1310.720 - 1316.788: 52.8702% ( 168) 00:16:56.356 1316.788 - 1322.856: 53.0638% ( 181) 00:16:56.356 1322.856 - 1328.924: 53.2926% ( 214) 00:16:56.356 1328.924 - 1334.993: 53.5108% ( 204) 00:16:56.356 1334.993 - 1341.061: 53.6947% ( 172) 00:16:56.356 1341.061 - 1347.129: 53.9193% ( 210) 00:16:56.356 1347.129 - 1353.197: 54.1289% ( 196) 00:16:56.356 1353.197 - 1359.265: 54.3075% ( 167) 00:16:56.356 1359.265 - 1365.333: 54.5342% ( 212) 00:16:56.356 1365.333 - 1371.401: 54.8079% ( 256) 00:16:56.356 1371.401 - 1377.470: 54.9951% ( 175) 00:16:56.356 1377.470 - 1383.538: 55.1576% ( 152) 00:16:56.356 1383.538 - 1389.606: 55.3619% ( 191) 00:16:56.356 1389.606 - 1395.674: 55.5426% ( 169) 00:16:56.356 1395.674 - 1401.742: 55.7255% ( 171) 00:16:56.356 1401.742 - 1407.810: 55.9180% ( 180) 00:16:56.356 1407.810 - 1413.879: 56.0955% ( 166) 00:16:56.356 1413.879 - 1419.947: 56.2815% ( 174) 00:16:56.356 1419.947 - 1426.015: 56.4751% ( 181) 00:16:56.356 1426.015 - 1432.083: 56.6836% ( 195) 00:16:56.356 1432.083 - 1438.151: 56.8622% ( 167) 00:16:56.356 1438.151 - 1444.219: 57.0996% ( 222) 00:16:56.356 1444.219 - 1450.287: 57.2568% ( 147) 00:16:56.356 1450.287 - 1456.356: 57.4632% ( 193) 00:16:56.356 1456.356 - 1462.424: 57.7124% ( 233) 00:16:56.356 1462.424 - 1468.492: 57.9498% ( 222) 00:16:56.356 1468.492 - 1474.560: 58.1091% ( 149) 00:16:56.356 1474.560 - 1480.628: 58.2973% ( 176) 00:16:56.356 1480.628 - 1486.696: 58.5390% ( 226) 00:16:56.356 1486.696 - 1492.764: 58.7443% ( 192) 00:16:56.356 1492.764 - 1498.833: 58.9593% ( 201) 00:16:56.356 1498.833 - 1504.901: 59.1443% ( 173) 00:16:56.356 1504.901 - 1510.969: 59.3656% ( 207) 00:16:56.356 1510.969 - 1517.037: 59.5506% ( 173) 00:16:56.356 1517.037 - 1523.105: 59.7624% ( 198) 00:16:56.356 1523.105 - 1529.173: 59.9784% ( 202) 00:16:56.356 1529.173 - 1535.241: 60.1891% ( 197) 00:16:56.356 1535.241 - 1541.310: 60.3997% ( 197) 00:16:56.356 1541.310 - 1547.378: 60.6115% ( 198) 00:16:56.356 1547.378 - 1553.446: 60.8446% ( 218) 00:16:56.356 1553.446 - 1565.582: 61.2210% ( 352) 00:16:56.356 1565.582 - 1577.719: 61.6937% ( 442) 00:16:56.356 1577.719 - 1589.855: 62.0776% ( 359) 00:16:56.356 1589.855 - 1601.991: 62.4444% ( 343) 00:16:56.356 1601.991 - 1614.127: 62.8529% ( 382) 00:16:56.356 1614.127 - 1626.264: 63.2518% ( 373) 00:16:56.356 1626.264 - 1638.400: 63.6389% ( 362) 00:16:56.356 1638.400 - 1650.536: 64.0142% ( 351) 00:16:56.356 1650.536 - 1662.673: 64.3190% ( 285) 00:16:56.356 1662.673 - 1674.809: 64.6452% ( 305) 00:16:56.356 1674.809 - 1686.945: 64.9425% ( 278) 00:16:56.356 1686.945 - 1699.081: 65.2280% ( 267) 00:16:56.356 1699.081 - 1711.218: 65.5050% ( 259) 00:16:56.356 1711.218 - 1723.354: 65.6889% ( 172) 00:16:56.356 1723.354 - 1735.490: 65.8846% ( 183) 00:16:56.356 1735.490 - 1747.627: 66.1017% ( 203) 00:16:56.356 1747.627 - 1759.763: 66.3369% ( 220) 00:16:56.356 1759.763 - 1771.899: 66.4899% ( 143) 00:16:56.356 1771.899 - 1784.036: 66.6513% ( 151) 00:16:56.356 1784.036 - 1796.172: 66.8256% ( 163) 
00:16:56.356 1796.172 - 1808.308: 67.0053% ( 168) 00:16:56.356 1808.308 - 1820.444: 67.1561% ( 141) 00:16:56.356 1820.444 - 1832.581: 67.3015% ( 136) 00:16:56.356 1832.581 - 1844.717: 67.4448% ( 134) 00:16:56.356 1844.717 - 1856.853: 67.5785% ( 125) 00:16:56.356 1856.853 - 1868.990: 67.7100% ( 123) 00:16:56.356 1868.990 - 1881.126: 67.8480% ( 129) 00:16:56.356 1881.126 - 1893.262: 67.9913% ( 134) 00:16:56.356 1893.262 - 1905.399: 68.1538% ( 152) 00:16:56.356 1905.399 - 1917.535: 68.2928% ( 130) 00:16:56.356 1917.535 - 1929.671: 68.4201% ( 119) 00:16:56.356 1929.671 - 1941.807: 68.5623% ( 133) 00:16:56.356 1941.807 - 1953.944: 68.7249% ( 152) 00:16:56.356 1953.944 - 1966.080: 68.8660% ( 132) 00:16:56.356 1966.080 - 1978.216: 69.0500% ( 172) 00:16:56.356 1978.216 - 1990.353: 69.2029% ( 143) 00:16:56.356 1990.353 - 2002.489: 69.3847% ( 170) 00:16:56.356 2002.489 - 2014.625: 69.5472% ( 152) 00:16:56.356 2014.625 - 2026.761: 69.6895% ( 133) 00:16:56.356 2026.761 - 2038.898: 69.8413% ( 142) 00:16:56.356 2038.898 - 2051.034: 70.0199% ( 167) 00:16:56.356 2051.034 - 2063.170: 70.1878% ( 157) 00:16:56.356 2063.170 - 2075.307: 70.3493% ( 151) 00:16:56.356 2075.307 - 2087.443: 70.5043% ( 145) 00:16:56.356 2087.443 - 2099.579: 70.6711% ( 156) 00:16:56.356 2099.579 - 2111.716: 70.8070% ( 127) 00:16:56.356 2111.716 - 2123.852: 70.9952% ( 176) 00:16:56.356 2123.852 - 2135.988: 71.1738% ( 167) 00:16:56.356 2135.988 - 2148.124: 71.3459% ( 161) 00:16:56.356 2148.124 - 2160.261: 71.5673% ( 207) 00:16:56.356 2160.261 - 2172.397: 71.8068% ( 224) 00:16:56.356 2172.397 - 2184.533: 72.0410% ( 219) 00:16:56.356 2184.533 - 2196.670: 72.2314% ( 178) 00:16:56.356 2196.670 - 2208.806: 72.4859% ( 238) 00:16:56.356 2208.806 - 2220.942: 72.7714% ( 267) 00:16:56.356 2220.942 - 2233.079: 73.0740% ( 283) 00:16:56.356 2233.079 - 2245.215: 73.3307% ( 240) 00:16:56.356 2245.215 - 2257.351: 73.6066% ( 258) 00:16:56.356 2257.351 - 2269.487: 73.9392% ( 311) 00:16:56.356 2269.487 - 2281.624: 74.2685% ( 308) 00:16:56.356 2281.624 - 2293.760: 74.5840% ( 295) 00:16:56.356 2293.760 - 2305.896: 74.8631% ( 261) 00:16:56.356 2305.896 - 2318.033: 75.2107% ( 325) 00:16:56.356 2318.033 - 2330.169: 75.5069% ( 277) 00:16:56.356 2330.169 - 2342.305: 75.8085% ( 282) 00:16:56.356 2342.305 - 2354.441: 76.0908% ( 264) 00:16:56.356 2354.441 - 2366.578: 76.3784% ( 269) 00:16:56.356 2366.578 - 2378.714: 76.6725% ( 275) 00:16:56.356 2378.714 - 2390.850: 76.9366% ( 247) 00:16:56.356 2390.850 - 2402.987: 77.2457% ( 289) 00:16:56.356 2402.987 - 2415.123: 77.5558% ( 290) 00:16:56.356 2415.123 - 2427.259: 77.8146% ( 242) 00:16:56.356 2427.259 - 2439.396: 78.1493% ( 313) 00:16:56.356 2439.396 - 2451.532: 78.4520% ( 283) 00:16:56.356 2451.532 - 2463.668: 78.7706% ( 298) 00:16:56.356 2463.668 - 2475.804: 78.9984% ( 213) 00:16:56.356 2475.804 - 2487.941: 79.2551% ( 240) 00:16:56.356 2487.941 - 2500.077: 79.4839% ( 214) 00:16:56.356 2500.077 - 2512.213: 79.7491% ( 248) 00:16:56.356 2512.213 - 2524.350: 80.0603% ( 291) 00:16:56.356 2524.350 - 2536.486: 80.2656% ( 192) 00:16:56.356 2536.486 - 2548.622: 80.5180% ( 236) 00:16:56.356 2548.622 - 2560.759: 80.7447% ( 212) 00:16:56.356 2560.759 - 2572.895: 80.9885% ( 228) 00:16:56.356 2572.895 - 2585.031: 81.2484% ( 243) 00:16:56.356 2585.031 - 2597.167: 81.5115% ( 246) 00:16:56.356 2597.167 - 2609.304: 81.8280% ( 296) 00:16:56.356 2609.304 - 2621.440: 82.1338% ( 286) 00:16:56.356 2621.440 - 2633.576: 82.4675% ( 312) 00:16:56.356 2633.576 - 2645.713: 82.8033% ( 314) 00:16:56.356 2645.713 - 2657.849: 83.1262% ( 302) 
00:16:56.356 2657.849 - 2669.985: 83.4652% ( 317) 00:16:56.356 2669.985 - 2682.121: 83.8331% ( 344) 00:16:56.356 2682.121 - 2694.258: 84.1379% ( 285) 00:16:56.357 2694.258 - 2706.394: 84.4993% ( 338) 00:16:56.357 2706.394 - 2718.530: 84.8704% ( 347) 00:16:56.357 2718.530 - 2730.667: 85.1570% ( 268) 00:16:56.357 2730.667 - 2742.803: 85.3890% ( 217) 00:16:56.357 2742.803 - 2754.939: 85.7056% ( 296) 00:16:56.357 2754.939 - 2767.076: 86.0050% ( 280) 00:16:56.357 2767.076 - 2779.212: 86.2927% ( 269) 00:16:56.357 2779.212 - 2791.348: 86.5825% ( 271) 00:16:56.357 2791.348 - 2803.484: 86.8605% ( 260) 00:16:56.357 2803.484 - 2815.621: 87.1054% ( 229) 00:16:56.357 2815.621 - 2827.757: 87.3428% ( 222) 00:16:56.357 2827.757 - 2839.893: 87.6690% ( 305) 00:16:56.357 2839.893 - 2852.030: 87.9192% ( 234) 00:16:56.357 2852.030 - 2864.166: 88.1844% ( 248) 00:16:56.357 2864.166 - 2876.302: 88.4753% ( 272) 00:16:56.357 2876.302 - 2888.439: 88.7619% ( 268) 00:16:56.357 2888.439 - 2900.575: 89.0677% ( 286) 00:16:56.357 2900.575 - 2912.711: 89.4409% ( 349) 00:16:56.357 2912.711 - 2924.847: 89.8195% ( 354) 00:16:56.357 2924.847 - 2936.984: 90.1777% ( 335) 00:16:56.357 2936.984 - 2949.120: 90.5862% ( 382) 00:16:56.357 2949.120 - 2961.256: 91.0097% ( 396) 00:16:56.357 2961.256 - 2973.393: 91.4310% ( 394) 00:16:56.357 2973.393 - 2985.529: 91.9123% ( 450) 00:16:56.357 2985.529 - 2997.665: 92.4256% ( 480) 00:16:56.357 2997.665 - 3009.801: 92.8448% ( 392) 00:16:56.357 3009.801 - 3021.938: 93.2511% ( 380) 00:16:56.357 3021.938 - 3034.074: 93.6906% ( 411) 00:16:56.357 3034.074 - 3046.210: 94.0885% ( 372) 00:16:56.357 3046.210 - 3058.347: 94.5483% ( 430) 00:16:56.357 3058.347 - 3070.483: 95.0317% ( 452) 00:16:56.357 3070.483 - 3082.619: 95.4583% ( 399) 00:16:56.357 3082.619 - 3094.756: 95.8497% ( 366) 00:16:56.357 3094.756 - 3106.892: 96.1941% ( 322) 00:16:56.357 3106.892 - 3131.164: 96.9619% ( 718) 00:16:56.357 3131.164 - 3155.437: 97.6270% ( 622) 00:16:56.357 3155.437 - 3179.710: 98.2216% ( 556) 00:16:56.357 3179.710 - 3203.982: 98.6836% ( 432) 00:16:56.357 3203.982 - 3228.255: 99.0942% ( 384) 00:16:56.357 3228.255 - 3252.527: 99.3872% ( 274) 00:16:56.357 3252.527 - 3276.800: 99.5776% ( 178) 00:16:56.357 3276.800 - 3301.073: 99.6974% ( 112) 00:16:56.357 3301.073 - 3325.345: 99.8064% ( 102) 00:16:56.357 3325.345 - 3349.618: 99.8620% ( 52) 00:16:56.357 3349.618 - 3373.890: 99.9070% ( 42) 00:16:56.357 3373.890 - 3398.163: 99.9540% ( 44) 00:16:56.357 3398.163 - 3422.436: 99.9658% ( 11) 00:16:56.357 3422.436 - 3446.708: 99.9797% ( 13) 00:16:56.357 3446.708 - 3470.981: 99.9872% ( 7) 00:16:56.357 3470.981 - 3495.253: 99.9893% ( 2) 00:16:56.357 3495.253 - 3519.526: 99.9968% ( 7) 00:16:56.357 3519.526 - 3543.799: 99.9989% ( 2) 00:16:56.357 3543.799 - 3568.071: 100.0000% ( 1) 00:16:56.357 00:16:56.357 23:58:34 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:16:56.357 00:16:56.357 real 0m2.679s 00:16:56.357 user 0m2.184s 00:16:56.357 sys 0m0.358s 00:16:56.357 23:58:34 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.357 23:58:34 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:16:56.357 ************************************ 00:16:56.357 END TEST nvme_perf 00:16:56.357 ************************************ 00:16:56.357 23:58:34 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0 00:16:56.357 23:58:34 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:56.357 23:58:34 nvme -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.357 23:58:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.357 ************************************ 00:16:56.357 START TEST nvme_hello_world 00:16:56.357 ************************************ 00:16:56.357 23:58:34 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0 00:16:56.616 Initializing NVMe Controllers 00:16:56.616 Attached to 0000:84:00.0 00:16:56.616 Namespace ID: 1 size: 1000GB 00:16:56.616 Initialization complete. 00:16:56.616 INFO: using host memory buffer for IO 00:16:56.616 Hello world! 00:16:56.616 00:16:56.616 real 0m0.332s 00:16:56.616 user 0m0.083s 00:16:56.616 sys 0m0.185s 00:16:56.616 23:58:35 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.616 23:58:35 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:56.616 ************************************ 00:16:56.616 END TEST nvme_hello_world 00:16:56.616 ************************************ 00:16:56.616 23:58:35 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl 00:16:56.616 23:58:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:56.616 23:58:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.616 23:58:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.874 ************************************ 00:16:56.874 START TEST nvme_sgl 00:16:56.874 ************************************ 00:16:56.874 23:58:35 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl 00:16:57.132 NVMe Readv/Writev Request test 00:16:57.132 Attached to 0000:84:00.0 00:16:57.132 0000:84:00.0: build_io_request_0 test passed 00:16:57.132 0000:84:00.0: build_io_request_1 test passed 00:16:57.132 0000:84:00.0: build_io_request_2 test passed 00:16:57.132 0000:84:00.0: build_io_request_3 test passed 00:16:57.132 0000:84:00.0: build_io_request_4 test passed 00:16:57.132 0000:84:00.0: build_io_request_5 test passed 00:16:57.132 0000:84:00.0: build_io_request_6 test passed 00:16:57.132 0000:84:00.0: build_io_request_7 test passed 00:16:57.132 0000:84:00.0: build_io_request_8 test passed 00:16:57.132 0000:84:00.0: build_io_request_9 test passed 00:16:57.132 0000:84:00.0: build_io_request_10 test passed 00:16:57.132 0000:84:00.0: build_io_request_11 test passed 00:16:57.132 Cleaning up... 
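An aside on the harness itself: each example in this log is launched through the run_test helper from autotest_common.sh, which is what wraps every binary in the ************ START TEST / END TEST banners and the real/user/sys timing lines. A stripped-down, hypothetical sketch of that pattern (not SPDK's actual implementation, which also handles xtrace and exit-code bookkeeping):

```bash
# Hypothetical minimal version of the run_test pattern seen in this log:
# banner, timed execution of the test binary, closing banner.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # bash's time keyword prints the real/user/sys lines
    local rc=$?          # capture the test binary's exit status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage mirroring the log, e.g.:
#   run_test_sketch nvme_sgl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl
```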
00:16:57.132 00:16:57.132 real 0m0.332s 00:16:57.132 user 0m0.144s 00:16:57.132 sys 0m0.128s 00:16:57.132 23:58:35 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.132 23:58:35 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:16:57.132 ************************************ 00:16:57.132 END TEST nvme_sgl 00:16:57.132 ************************************ 00:16:57.132 23:58:35 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp 00:16:57.132 23:58:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.132 23:58:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.132 23:58:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.132 ************************************ 00:16:57.132 START TEST nvme_e2edp 00:16:57.132 ************************************ 00:16:57.132 23:58:35 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp 00:16:57.391 NVMe Write/Read with End-to-End data protection test 00:16:57.391 Attached to 0000:84:00.0 00:16:57.391 Cleaning up... 00:16:57.391 00:16:57.391 real 0m0.222s 00:16:57.391 user 0m0.068s 00:16:57.391 sys 0m0.099s 00:16:57.391 23:58:35 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.391 23:58:35 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:16:57.391 ************************************ 00:16:57.391 END TEST nvme_e2edp 00:16:57.391 ************************************ 00:16:57.391 23:58:35 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve 00:16:57.391 23:58:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.391 23:58:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.391 23:58:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.391 ************************************ 00:16:57.391 START TEST nvme_reserve 00:16:57.391 ************************************ 00:16:57.391 23:58:35 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve 00:16:57.650 ===================================================== 00:16:57.650 NVMe Controller at PCI bus 132, device 0, function 0 00:16:57.650 ===================================================== 00:16:57.650 Reservations: Not Supported 00:16:57.650 Reservation test passed 00:16:57.650 00:16:57.650 real 0m0.323s 00:16:57.650 user 0m0.084s 00:16:57.650 sys 0m0.172s 00:16:57.650 23:58:36 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.650 23:58:36 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:16:57.650 ************************************ 00:16:57.650 END TEST nvme_reserve 00:16:57.650 ************************************ 00:16:57.650 23:58:36 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection 00:16:57.650 23:58:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.650 23:58:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.650 23:58:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.909 ************************************ 00:16:57.909 START TEST nvme_err_injection 00:16:57.909 ************************************ 00:16:57.909 23:58:36 nvme.nvme_err_injection -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection 00:16:58.168 NVMe Error Injection test 00:16:58.168 Attached to 0000:84:00.0 00:16:58.168 0000:84:00.0: get features failed as expected 00:16:58.168 0000:84:00.0: get features successfully as expected 00:16:58.168 0000:84:00.0: read failed as expected 00:16:58.168 0000:84:00.0: read successfully as expected 00:16:58.168 Cleaning up... 00:16:58.168 00:16:58.168 real 0m0.368s 00:16:58.168 user 0m0.103s 00:16:58.168 sys 0m0.178s 00:16:58.168 23:58:36 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.168 23:58:36 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:16:58.168 ************************************ 00:16:58.168 END TEST nvme_err_injection 00:16:58.168 ************************************ 00:16:58.168 23:58:36 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:16:58.168 23:58:36 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:16:58.168 23:58:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.168 23:58:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.168 ************************************ 00:16:58.168 START TEST nvme_overhead 00:16:58.168 ************************************ 00:16:58.168 23:58:36 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:16:59.546 Initializing NVMe Controllers 00:16:59.546 Attached to 0000:84:00.0 00:16:59.546 Initialization complete. Launching workers. 00:16:59.546 submit (in ns) avg, min, max = 3913.8, 3508.9, 33320.0 00:16:59.546 complete (in ns) avg, min, max = 2496.3, 2137.8, 1141816.7 00:16:59.546 00:16:59.546 Submit histogram 00:16:59.546 ================ 00:16:59.546 Range in us Cumulative Count 00:16:59.546 3.508 - 3.532: 0.0398% ( 37) 00:16:59.546 3.532 - 3.556: 0.2109% ( 159) 00:16:59.546 3.556 - 3.579: 0.8233% ( 569) 00:16:59.546 3.579 - 3.603: 2.2288% ( 1306) 00:16:59.546 3.603 - 3.627: 6.0785% ( 3577) 00:16:59.546 3.627 - 3.650: 13.8229% ( 7196) 00:16:59.546 3.650 - 3.674: 23.8124% ( 9282) 00:16:59.546 3.674 - 3.698: 36.9229% ( 12182) 00:16:59.546 3.698 - 3.721: 49.3586% ( 11555) 00:16:59.546 3.721 - 3.745: 60.0045% ( 9892) 00:16:59.546 3.745 - 3.769: 67.5327% ( 6995) 00:16:59.546 3.769 - 3.793: 73.7392% ( 5767) 00:16:59.546 3.793 - 3.816: 78.3153% ( 4252) 00:16:59.546 3.816 - 3.840: 82.0950% ( 3512) 00:16:59.546 3.840 - 3.864: 84.8630% ( 2572) 00:16:59.546 3.864 - 3.887: 86.6172% ( 1630) 00:16:59.546 3.887 - 3.911: 88.3553% ( 1615) 00:16:59.546 3.911 - 3.935: 90.3700% ( 1872) 00:16:59.546 3.935 - 3.959: 92.0661% ( 1576) 00:16:59.546 3.959 - 3.982: 93.5707% ( 1398) 00:16:59.546 3.982 - 4.006: 94.8148% ( 1156) 00:16:59.546 4.006 - 4.030: 95.8210% ( 935) 00:16:59.546 4.030 - 4.053: 96.6594% ( 779) 00:16:59.546 4.053 - 4.077: 97.2330% ( 533) 00:16:59.546 4.077 - 4.101: 97.5764% ( 319) 00:16:59.546 4.101 - 4.124: 97.8077% ( 215) 00:16:59.546 4.124 - 4.148: 97.9573% ( 139) 00:16:59.546 4.148 - 4.172: 98.0564% ( 92) 00:16:59.546 4.172 - 4.196: 98.1360% ( 74) 00:16:59.546 4.196 - 4.219: 98.2156% ( 74) 00:16:59.546 4.219 - 4.243: 98.2662% ( 47) 00:16:59.546 4.243 - 4.267: 98.3007% ( 32) 00:16:59.546 4.267 - 4.290: 98.3297% ( 27) 00:16:59.546 4.290 - 4.314: 98.3523% ( 21) 00:16:59.546 4.314 - 4.338: 98.3738% ( 20) 00:16:59.546 4.338 - 4.361: 98.3900% ( 15) 
00:16:59.546 4.361 - 4.385: 98.3997% ( 9) 00:16:59.546 4.385 - 4.409: 98.4094% ( 9) 00:16:59.546 4.409 - 4.433: 98.4147% ( 5) 00:16:59.546 4.433 - 4.456: 98.4158% ( 1) 00:16:59.546 4.456 - 4.480: 98.4190% ( 3) 00:16:59.546 4.480 - 4.504: 98.4287% ( 9) 00:16:59.546 4.551 - 4.575: 98.4298% ( 1) 00:16:59.546 4.575 - 4.599: 98.4309% ( 1) 00:16:59.546 4.599 - 4.622: 98.4330% ( 2) 00:16:59.546 4.646 - 4.670: 98.4341% ( 1) 00:16:59.546 4.670 - 4.693: 98.4352% ( 1) 00:16:59.546 4.717 - 4.741: 98.4363% ( 1) 00:16:59.547 4.764 - 4.788: 98.4406% ( 4) 00:16:59.547 4.788 - 4.812: 98.4459% ( 5) 00:16:59.547 4.836 - 4.859: 98.4470% ( 1) 00:16:59.547 4.859 - 4.883: 98.4492% ( 2) 00:16:59.547 4.930 - 4.954: 98.4502% ( 1) 00:16:59.547 5.049 - 5.073: 98.4524% ( 2) 00:16:59.547 5.144 - 5.167: 98.4535% ( 1) 00:16:59.547 5.239 - 5.262: 98.4546% ( 1) 00:16:59.547 5.262 - 5.286: 98.4556% ( 1) 00:16:59.547 5.333 - 5.357: 98.4567% ( 1) 00:16:59.547 5.381 - 5.404: 98.4578% ( 1) 00:16:59.547 5.973 - 5.997: 98.4589% ( 1) 00:16:59.547 6.116 - 6.163: 98.4621% ( 3) 00:16:59.547 6.163 - 6.210: 98.4642% ( 2) 00:16:59.547 6.210 - 6.258: 98.4707% ( 6) 00:16:59.547 6.258 - 6.305: 98.4728% ( 2) 00:16:59.547 6.305 - 6.353: 98.4782% ( 5) 00:16:59.547 6.400 - 6.447: 98.4879% ( 9) 00:16:59.547 6.447 - 6.495: 98.4987% ( 10) 00:16:59.547 6.495 - 6.542: 98.5073% ( 8) 00:16:59.547 6.542 - 6.590: 98.5213% ( 13) 00:16:59.547 6.590 - 6.637: 98.5353% ( 13) 00:16:59.547 6.637 - 6.684: 98.5471% ( 11) 00:16:59.547 6.684 - 6.732: 98.5708% ( 22) 00:16:59.547 6.732 - 6.779: 98.6009% ( 28) 00:16:59.547 6.779 - 6.827: 98.6192% ( 17) 00:16:59.547 6.827 - 6.874: 98.6440% ( 23) 00:16:59.547 6.874 - 6.921: 98.6698% ( 24) 00:16:59.547 6.921 - 6.969: 98.6838% ( 13) 00:16:59.547 6.969 - 7.016: 98.7042% ( 19) 00:16:59.547 7.016 - 7.064: 98.7193% ( 14) 00:16:59.547 7.064 - 7.111: 98.7322% ( 12) 00:16:59.547 7.111 - 7.159: 98.7462% ( 13) 00:16:59.547 7.159 - 7.206: 98.7516% ( 5) 00:16:59.547 7.206 - 7.253: 98.7580% ( 6) 00:16:59.547 7.253 - 7.301: 98.7656% ( 7) 00:16:59.547 7.301 - 7.348: 98.7785% ( 12) 00:16:59.547 7.348 - 7.396: 98.7860% ( 7) 00:16:59.547 7.396 - 7.443: 98.7871% ( 1) 00:16:59.547 7.443 - 7.490: 98.7914% ( 4) 00:16:59.547 7.490 - 7.538: 98.7936% ( 2) 00:16:59.547 7.538 - 7.585: 98.7979% ( 4) 00:16:59.547 7.585 - 7.633: 98.8022% ( 4) 00:16:59.547 7.633 - 7.680: 98.8076% ( 5) 00:16:59.547 7.727 - 7.775: 98.8119% ( 4) 00:16:59.547 7.775 - 7.822: 98.8194% ( 7) 00:16:59.547 7.822 - 7.870: 98.8237% ( 4) 00:16:59.547 7.870 - 7.917: 98.8269% ( 3) 00:16:59.547 7.917 - 7.964: 98.8291% ( 2) 00:16:59.547 7.964 - 8.012: 98.8366% ( 7) 00:16:59.547 8.012 - 8.059: 98.8377% ( 1) 00:16:59.547 8.059 - 8.107: 98.8398% ( 2) 00:16:59.547 8.107 - 8.154: 98.8409% ( 1) 00:16:59.547 8.154 - 8.201: 98.8431% ( 2) 00:16:59.547 8.201 - 8.249: 98.8463% ( 3) 00:16:59.547 8.249 - 8.296: 98.8517% ( 5) 00:16:59.547 8.296 - 8.344: 98.8528% ( 1) 00:16:59.547 8.344 - 8.391: 98.8549% ( 2) 00:16:59.547 8.439 - 8.486: 98.8560% ( 1) 00:16:59.547 8.486 - 8.533: 98.8581% ( 2) 00:16:59.547 8.581 - 8.628: 98.8592% ( 1) 00:16:59.547 8.628 - 8.676: 98.8614% ( 2) 00:16:59.547 8.676 - 8.723: 98.8624% ( 1) 00:16:59.547 8.865 - 8.913: 98.8635% ( 1) 00:16:59.547 8.913 - 8.960: 98.8667% ( 3) 00:16:59.547 9.055 - 9.102: 98.8678% ( 1) 00:16:59.547 9.150 - 9.197: 98.8689% ( 1) 00:16:59.547 9.244 - 9.292: 98.8700% ( 1) 00:16:59.547 9.434 - 9.481: 98.8710% ( 1) 00:16:59.547 9.481 - 9.529: 98.8721% ( 1) 00:16:59.547 9.529 - 9.576: 98.8732% ( 1) 00:16:59.547 9.624 - 9.671: 98.8743% ( 1) 
00:16:59.547 9.766 - 9.813: 98.8754% ( 1) 00:16:59.547 9.813 - 9.861: 98.8764% ( 1) 00:16:59.547 9.908 - 9.956: 98.8775% ( 1) 00:16:59.547 9.956 - 10.003: 98.8786% ( 1) 00:16:59.547 10.098 - 10.145: 98.8807% ( 2) 00:16:59.547 10.193 - 10.240: 98.8818% ( 1) 00:16:59.547 10.524 - 10.572: 98.8829% ( 1) 00:16:59.547 10.619 - 10.667: 98.8840% ( 1) 00:16:59.547 10.667 - 10.714: 98.8850% ( 1) 00:16:59.547 10.761 - 10.809: 98.8861% ( 1) 00:16:59.547 10.856 - 10.904: 98.8872% ( 1) 00:16:59.547 11.046 - 11.093: 98.8915% ( 4) 00:16:59.547 11.378 - 11.425: 98.8936% ( 2) 00:16:59.547 11.425 - 11.473: 98.8958% ( 2) 00:16:59.547 11.473 - 11.520: 98.8980% ( 2) 00:16:59.547 11.520 - 11.567: 98.8990% ( 1) 00:16:59.547 11.567 - 11.615: 98.9001% ( 1) 00:16:59.547 11.615 - 11.662: 98.9023% ( 2) 00:16:59.547 11.662 - 11.710: 98.9044% ( 2) 00:16:59.547 11.710 - 11.757: 98.9076% ( 3) 00:16:59.547 11.757 - 11.804: 98.9087% ( 1) 00:16:59.547 11.852 - 11.899: 98.9109% ( 2) 00:16:59.547 11.899 - 11.947: 98.9130% ( 2) 00:16:59.547 11.947 - 11.994: 98.9152% ( 2) 00:16:59.547 11.994 - 12.041: 98.9184% ( 3) 00:16:59.547 12.041 - 12.089: 98.9206% ( 2) 00:16:59.547 12.089 - 12.136: 98.9227% ( 2) 00:16:59.547 12.136 - 12.231: 98.9313% ( 8) 00:16:59.547 12.231 - 12.326: 98.9388% ( 7) 00:16:59.547 12.326 - 12.421: 98.9442% ( 5) 00:16:59.547 12.421 - 12.516: 98.9528% ( 8) 00:16:59.547 12.516 - 12.610: 98.9582% ( 5) 00:16:59.547 12.610 - 12.705: 98.9625% ( 4) 00:16:59.547 12.705 - 12.800: 98.9722% ( 9) 00:16:59.547 12.800 - 12.895: 98.9776% ( 5) 00:16:59.547 12.895 - 12.990: 98.9830% ( 5) 00:16:59.547 12.990 - 13.084: 98.9884% ( 5) 00:16:59.547 13.084 - 13.179: 98.9959% ( 7) 00:16:59.547 13.179 - 13.274: 99.0013% ( 5) 00:16:59.547 13.274 - 13.369: 99.0045% ( 3) 00:16:59.547 13.369 - 13.464: 99.0077% ( 3) 00:16:59.547 13.464 - 13.559: 99.0088% ( 1) 00:16:59.547 13.559 - 13.653: 99.0120% ( 3) 00:16:59.547 13.653 - 13.748: 99.0185% ( 6) 00:16:59.547 13.748 - 13.843: 99.0271% ( 8) 00:16:59.547 13.843 - 13.938: 99.0336% ( 6) 00:16:59.547 13.938 - 14.033: 99.0432% ( 9) 00:16:59.547 14.033 - 14.127: 99.0562% ( 12) 00:16:59.547 14.127 - 14.222: 99.0626% ( 6) 00:16:59.547 14.222 - 14.317: 99.0701% ( 7) 00:16:59.547 14.317 - 14.412: 99.0777% ( 7) 00:16:59.547 14.412 - 14.507: 99.0863% ( 8) 00:16:59.547 14.507 - 14.601: 99.0992% ( 12) 00:16:59.547 14.601 - 14.696: 99.1078% ( 8) 00:16:59.547 14.696 - 14.791: 99.1100% ( 2) 00:16:59.547 14.791 - 14.886: 99.1164% ( 6) 00:16:59.547 14.886 - 14.981: 99.1207% ( 4) 00:16:59.547 14.981 - 15.076: 99.1240% ( 3) 00:16:59.547 15.076 - 15.170: 99.1261% ( 2) 00:16:59.547 15.170 - 15.265: 99.1272% ( 1) 00:16:59.547 15.265 - 15.360: 99.1283% ( 1) 00:16:59.547 15.644 - 15.739: 99.1293% ( 1) 00:16:59.547 16.024 - 16.119: 99.1304% ( 1) 00:16:59.547 16.782 - 16.877: 99.1347% ( 4) 00:16:59.548 16.877 - 16.972: 99.1390% ( 4) 00:16:59.548 16.972 - 17.067: 99.1519% ( 12) 00:16:59.548 17.067 - 17.161: 99.1595% ( 7) 00:16:59.548 17.161 - 17.256: 99.1939% ( 32) 00:16:59.548 17.256 - 17.351: 99.2327% ( 36) 00:16:59.548 17.351 - 17.446: 99.2553% ( 21) 00:16:59.548 17.446 - 17.541: 99.3015% ( 43) 00:16:59.548 17.541 - 17.636: 99.3478% ( 43) 00:16:59.548 17.636 - 17.730: 99.4156% ( 63) 00:16:59.548 17.730 - 17.825: 99.4823% ( 62) 00:16:59.548 17.825 - 17.920: 99.5351% ( 49) 00:16:59.548 17.920 - 18.015: 99.6050% ( 65) 00:16:59.548 18.015 - 18.110: 99.6448% ( 37) 00:16:59.548 18.110 - 18.204: 99.6954% ( 47) 00:16:59.548 18.204 - 18.299: 99.7471% ( 48) 00:16:59.548 18.299 - 18.394: 99.7955% ( 45) 00:16:59.548 18.394 - 
18.489: 99.8461% ( 47) 00:16:59.548 18.489 - 18.584: 99.8730% ( 25) 00:16:59.548 18.584 - 18.679: 99.8988% ( 24) 00:16:59.548 18.679 - 18.773: 99.9161% ( 16) 00:16:59.548 18.773 - 18.868: 99.9257% ( 9) 00:16:59.548 18.868 - 18.963: 99.9408% ( 14) 00:16:59.548 18.963 - 19.058: 99.9494% ( 8) 00:16:59.548 19.058 - 19.153: 99.9505% ( 1) 00:16:59.548 19.153 - 19.247: 99.9516% ( 1) 00:16:59.548 19.247 - 19.342: 99.9526% ( 1) 00:16:59.548 19.532 - 19.627: 99.9537% ( 1) 00:16:59.548 19.627 - 19.721: 99.9548% ( 1) 00:16:59.548 19.721 - 19.816: 99.9559% ( 1) 00:16:59.548 19.911 - 20.006: 99.9570% ( 1) 00:16:59.548 20.101 - 20.196: 99.9580% ( 1) 00:16:59.548 20.480 - 20.575: 99.9591% ( 1) 00:16:59.548 20.575 - 20.670: 99.9602% ( 1) 00:16:59.548 20.764 - 20.859: 99.9623% ( 2) 00:16:59.548 20.859 - 20.954: 99.9634% ( 1) 00:16:59.548 20.954 - 21.049: 99.9645% ( 1) 00:16:59.548 21.144 - 21.239: 99.9656% ( 1) 00:16:59.548 21.333 - 21.428: 99.9677% ( 2) 00:16:59.548 21.428 - 21.523: 99.9688% ( 1) 00:16:59.548 21.523 - 21.618: 99.9709% ( 2) 00:16:59.548 21.618 - 21.713: 99.9731% ( 2) 00:16:59.548 21.713 - 21.807: 99.9742% ( 1) 00:16:59.548 22.376 - 22.471: 99.9752% ( 1) 00:16:59.548 22.850 - 22.945: 99.9763% ( 1) 00:16:59.548 24.273 - 24.462: 99.9774% ( 1) 00:16:59.548 24.652 - 24.841: 99.9785% ( 1) 00:16:59.548 25.410 - 25.600: 99.9806% ( 2) 00:16:59.548 25.600 - 25.790: 99.9817% ( 1) 00:16:59.548 25.790 - 25.979: 99.9828% ( 1) 00:16:59.548 25.979 - 26.169: 99.9839% ( 1) 00:16:59.548 26.169 - 26.359: 99.9849% ( 1) 00:16:59.548 27.117 - 27.307: 99.9860% ( 1) 00:16:59.548 27.876 - 28.065: 99.9871% ( 1) 00:16:59.548 28.065 - 28.255: 99.9892% ( 2) 00:16:59.548 28.255 - 28.444: 99.9914% ( 2) 00:16:59.548 28.444 - 28.634: 99.9935% ( 2) 00:16:59.548 28.634 - 28.824: 99.9946% ( 1) 00:16:59.548 29.013 - 29.203: 99.9957% ( 1) 00:16:59.548 29.393 - 29.582: 99.9978% ( 2) 00:16:59.548 32.237 - 32.427: 99.9989% ( 1) 00:16:59.548 33.185 - 33.375: 100.0000% ( 1) 00:16:59.548 00:16:59.548 Complete histogram 00:16:59.548 ================== 00:16:59.548 Range in us Cumulative Count 00:16:59.548 2.133 - 2.145: 0.0323% ( 30) 00:16:59.548 2.145 - 2.157: 0.3864% ( 329) 00:16:59.548 2.157 - 2.169: 1.0342% ( 602) 00:16:59.548 2.169 - 2.181: 1.6046% ( 530) 00:16:59.548 2.181 - 2.193: 2.4602% ( 795) 00:16:59.548 2.193 - 2.204: 4.3404% ( 1747) 00:16:59.548 2.204 - 2.216: 8.0307% ( 3429) 00:16:59.548 2.216 - 2.228: 11.9934% ( 3682) 00:16:59.548 2.228 - 2.240: 15.0617% ( 2851) 00:16:59.548 2.240 - 2.252: 18.7542% ( 3431) 00:16:59.548 2.252 - 2.264: 24.5776% ( 5411) 00:16:59.548 2.264 - 2.276: 32.1789% ( 7063) 00:16:59.548 2.276 - 2.287: 40.8134% ( 8023) 00:16:59.548 2.287 - 2.299: 48.9733% ( 7582) 00:16:59.548 2.299 - 2.311: 56.8017% ( 7274) 00:16:59.548 2.311 - 2.323: 65.4771% ( 8061) 00:16:59.548 2.323 - 2.335: 73.4260% ( 7386) 00:16:59.548 2.335 - 2.347: 79.8134% ( 5935) 00:16:59.548 2.347 - 2.359: 85.5066% ( 5290) 00:16:59.548 2.359 - 2.370: 89.4498% ( 3664) 00:16:59.548 2.370 - 2.382: 91.7164% ( 2106) 00:16:59.548 2.382 - 2.394: 93.0907% ( 1277) 00:16:59.548 2.394 - 2.406: 93.7752% ( 636) 00:16:59.548 2.406 - 2.418: 94.1981% ( 393) 00:16:59.548 2.418 - 2.430: 94.5737% ( 349) 00:16:59.548 2.430 - 2.441: 94.9267% ( 328) 00:16:59.548 2.441 - 2.453: 95.1333% ( 192) 00:16:59.548 2.453 - 2.465: 95.2980% ( 153) 00:16:59.548 2.465 - 2.477: 95.4207% ( 114) 00:16:59.548 2.477 - 2.489: 95.5262% ( 98) 00:16:59.548 2.489 - 2.501: 95.6693% ( 133) 00:16:59.548 2.501 - 2.513: 95.7920% ( 114) 00:16:59.548 2.513 - 2.524: 95.8813% ( 83) 00:16:59.548 
2.524 - 2.536: 95.9685% ( 81) 00:16:59.548 2.536 - 2.548: 96.0471% ( 73) 00:16:59.548 2.548 - 2.560: 96.1041% ( 53) 00:16:59.548 2.560 - 2.572: 96.1687% ( 60) 00:16:59.548 2.572 - 2.584: 96.2160% ( 44) 00:16:59.548 2.584 - 2.596: 96.2526% ( 34) 00:16:59.548 2.596 - 2.607: 96.2806% ( 26) 00:16:59.548 2.607 - 2.619: 96.3010% ( 19) 00:16:59.548 2.619 - 2.631: 96.3193% ( 17) 00:16:59.548 2.631 - 2.643: 96.3398% ( 19) 00:16:59.548 2.643 - 2.655: 96.3559% ( 15) 00:16:59.548 2.655 - 2.667: 96.3882% ( 30) 00:16:59.548 2.667 - 2.679: 96.4130% ( 23) 00:16:59.548 2.679 - 2.690: 96.4259% ( 12) 00:16:59.548 2.690 - 2.702: 96.4366% ( 10) 00:16:59.548 2.702 - 2.714: 96.4517% ( 14) 00:16:59.548 2.714 - 2.726: 96.4625% ( 10) 00:16:59.548 2.726 - 2.738: 96.4711% ( 8) 00:16:59.548 2.738 - 2.750: 96.4926% ( 20) 00:16:59.548 2.750 - 2.761: 96.5131% ( 19) 00:16:59.548 2.761 - 2.773: 96.5324% ( 18) 00:16:59.548 2.773 - 2.785: 96.5540% ( 20) 00:16:59.548 2.785 - 2.797: 96.5787% ( 23) 00:16:59.548 2.797 - 2.809: 96.6228% ( 41) 00:16:59.548 2.809 - 2.821: 96.6820% ( 55) 00:16:59.548 2.821 - 2.833: 96.7455% ( 59) 00:16:59.548 2.833 - 2.844: 96.8348% ( 83) 00:16:59.548 2.844 - 2.856: 96.9478% ( 105) 00:16:59.548 2.856 - 2.868: 97.0824% ( 125) 00:16:59.548 2.868 - 2.880: 97.2277% ( 135) 00:16:59.548 2.880 - 2.892: 97.3837% ( 145) 00:16:59.548 2.892 - 2.904: 97.5451% ( 150) 00:16:59.548 2.904 - 2.916: 97.7130% ( 156) 00:16:59.548 2.916 - 2.927: 97.9003% ( 174) 00:16:59.548 2.927 - 2.939: 98.0693% ( 157) 00:16:59.548 2.939 - 2.951: 98.2049% ( 126) 00:16:59.548 2.951 - 2.963: 98.3136% ( 101) 00:16:59.549 2.963 - 2.975: 98.3738% ( 56) 00:16:59.549 2.975 - 2.987: 98.4158% ( 39) 00:16:59.549 2.987 - 2.999: 98.4373% ( 20) 00:16:59.549 2.999 - 3.010: 98.4406% ( 3) 00:16:59.549 3.010 - 3.022: 98.4427% ( 2) 00:16:59.549 3.022 - 3.034: 98.4459% ( 3) 00:16:59.549 3.034 - 3.058: 98.4492% ( 3) 00:16:59.549 3.058 - 3.081: 98.4535% ( 4) 00:16:59.549 3.081 - 3.105: 98.4546% ( 1) 00:16:59.549 3.105 - 3.129: 98.4567% ( 2) 00:16:59.549 3.200 - 3.224: 98.4589% ( 2) 00:16:59.549 3.224 - 3.247: 98.4599% ( 1) 00:16:59.549 3.247 - 3.271: 98.4610% ( 1) 00:16:59.549 3.342 - 3.366: 98.4621% ( 1) 00:16:59.549 3.840 - 3.864: 98.4632% ( 1) 00:16:59.549 4.077 - 4.101: 98.4642% ( 1) 00:16:59.549 4.456 - 4.480: 98.4653% ( 1) 00:16:59.549 4.527 - 4.551: 98.4664% ( 1) 00:16:59.549 4.622 - 4.646: 98.4685% ( 2) 00:16:59.549 4.646 - 4.670: 98.4696% ( 1) 00:16:59.549 4.670 - 4.693: 98.4718% ( 2) 00:16:59.549 4.717 - 4.741: 98.4728% ( 1) 00:16:59.549 4.764 - 4.788: 98.4750% ( 2) 00:16:59.549 4.788 - 4.812: 98.4761% ( 1) 00:16:59.549 4.836 - 4.859: 98.4847% ( 8) 00:16:59.549 4.859 - 4.883: 98.4922% ( 7) 00:16:59.549 4.883 - 4.907: 98.4998% ( 7) 00:16:59.549 4.907 - 4.930: 98.5094% ( 9) 00:16:59.549 4.930 - 4.954: 98.5191% ( 9) 00:16:59.549 4.954 - 4.978: 98.5288% ( 9) 00:16:59.549 4.978 - 5.001: 98.5363% ( 7) 00:16:59.549 5.001 - 5.025: 98.5493% ( 12) 00:16:59.549 5.025 - 5.049: 98.5665% ( 16) 00:16:59.549 5.049 - 5.073: 98.5772% ( 10) 00:16:59.549 5.073 - 5.096: 98.5858% ( 8) 00:16:59.549 5.096 - 5.120: 98.6063% ( 19) 00:16:59.549 5.120 - 5.144: 98.6235% ( 16) 00:16:59.549 5.144 - 5.167: 98.6300% ( 6) 00:16:59.549 5.167 - 5.191: 98.6375% ( 7) 00:16:59.549 5.191 - 5.215: 98.6450% ( 7) 00:16:59.549 5.215 - 5.239: 98.6558% ( 10) 00:16:59.549 5.239 - 5.262: 98.6612% ( 5) 00:16:59.549 5.262 - 5.286: 98.6687% ( 7) 00:16:59.549 5.286 - 5.310: 98.6763% ( 7) 00:16:59.549 5.310 - 5.333: 98.6849% ( 8) 00:16:59.549 5.333 - 5.357: 98.6870% ( 2) 00:16:59.549 5.357 - 
5.381: 98.6935% ( 6) 00:16:59.549 5.381 - 5.404: 98.7032% ( 9) 00:16:59.549 5.404 - 5.428: 98.7107% ( 7) 00:16:59.549 5.428 - 5.452: 98.7161% ( 5) 00:16:59.549 5.452 - 5.476: 98.7204% ( 4) 00:16:59.549 5.476 - 5.499: 98.7301% ( 9) 00:16:59.549 5.499 - 5.523: 98.7333% ( 3) 00:16:59.549 5.523 - 5.547: 98.7387% ( 5) 00:16:59.549 5.547 - 5.570: 98.7419% ( 3) 00:16:59.549 5.570 - 5.594: 98.7441% ( 2) 00:16:59.549 5.594 - 5.618: 98.7462% ( 2) 00:16:59.549 5.641 - 5.665: 98.7484% ( 2) 00:16:59.549 5.665 - 5.689: 98.7505% ( 2) 00:16:59.549 5.689 - 5.713: 98.7570% ( 6) 00:16:59.549 5.713 - 5.736: 98.7591% ( 2) 00:16:59.549 5.760 - 5.784: 98.7623% ( 3) 00:16:59.549 5.784 - 5.807: 98.7645% ( 2) 00:16:59.549 5.807 - 5.831: 98.7667% ( 2) 00:16:59.549 5.831 - 5.855: 98.7688% ( 2) 00:16:59.549 5.879 - 5.902: 98.7720% ( 3) 00:16:59.549 5.902 - 5.926: 98.7763% ( 4) 00:16:59.549 5.926 - 5.950: 98.7774% ( 1) 00:16:59.549 5.950 - 5.973: 98.7785% ( 1) 00:16:59.549 5.973 - 5.997: 98.7796% ( 1) 00:16:59.549 5.997 - 6.021: 98.7806% ( 1) 00:16:59.549 6.021 - 6.044: 98.7839% ( 3) 00:16:59.549 6.044 - 6.068: 98.7860% ( 2) 00:16:59.549 6.116 - 6.163: 98.7893% ( 3) 00:16:59.549 6.163 - 6.210: 98.7968% ( 7) 00:16:59.549 6.210 - 6.258: 98.8000% ( 3) 00:16:59.549 6.258 - 6.305: 98.8032% ( 3) 00:16:59.549 6.305 - 6.353: 98.8076% ( 4) 00:16:59.549 6.353 - 6.400: 98.8086% ( 1) 00:16:59.549 6.400 - 6.447: 98.8108% ( 2) 00:16:59.549 6.447 - 6.495: 98.8129% ( 2) 00:16:59.549 6.542 - 6.590: 98.8162% ( 3) 00:16:59.549 6.590 - 6.637: 98.8215% ( 5) 00:16:59.549 6.637 - 6.684: 98.8226% ( 1) 00:16:59.549 6.684 - 6.732: 98.8248% ( 2) 00:16:59.549 6.779 - 6.827: 98.8280% ( 3) 00:16:59.549 6.827 - 6.874: 98.8312% ( 3) 00:16:59.549 6.874 - 6.921: 98.8345% ( 3) 00:16:59.549 6.921 - 6.969: 98.8355% ( 1) 00:16:59.549 6.969 - 7.016: 98.8388% ( 3) 00:16:59.549 7.064 - 7.111: 98.8409% ( 2) 00:16:59.549 7.111 - 7.159: 98.8431% ( 2) 00:16:59.549 7.159 - 7.206: 98.8452% ( 2) 00:16:59.549 7.253 - 7.301: 98.8463% ( 1) 00:16:59.549 7.301 - 7.348: 98.8484% ( 2) 00:16:59.549 7.443 - 7.490: 98.8495% ( 1) 00:16:59.549 7.490 - 7.538: 98.8506% ( 1) 00:16:59.549 7.538 - 7.585: 98.8517% ( 1) 00:16:59.549 7.585 - 7.633: 98.8528% ( 1) 00:16:59.549 8.107 - 8.154: 98.8538% ( 1) 00:16:59.549 8.581 - 8.628: 98.8549% ( 1) 00:16:59.549 10.193 - 10.240: 98.8560% ( 1) 00:16:59.549 10.287 - 10.335: 98.8571% ( 1) 00:16:59.549 10.524 - 10.572: 98.8581% ( 1) 00:16:59.549 11.141 - 11.188: 98.8603% ( 2) 00:16:59.549 11.188 - 11.236: 98.8614% ( 1) 00:16:59.549 11.473 - 11.520: 98.8624% ( 1) 00:16:59.549 11.567 - 11.615: 98.8667% ( 4) 00:16:59.549 11.710 - 11.757: 98.8678% ( 1) 00:16:59.549 11.804 - 11.852: 98.8710% ( 3) 00:16:59.549 11.994 - 12.041: 98.8721% ( 1) 00:16:59.549 12.041 - 12.089: 98.8732% ( 1) 00:16:59.549 12.089 - 12.136: 98.8743% ( 1) 00:16:59.549 12.136 - 12.231: 98.8764% ( 2) 00:16:59.549 12.231 - 12.326: 98.8797% ( 3) 00:16:59.549 12.326 - 12.421: 98.8872% ( 7) 00:16:59.549 12.421 - 12.516: 98.8926% ( 5) 00:16:59.549 12.516 - 12.610: 98.8990% ( 6) 00:16:59.549 12.610 - 12.705: 98.9033% ( 4) 00:16:59.549 12.705 - 12.800: 98.9109% ( 7) 00:16:59.549 12.800 - 12.895: 98.9130% ( 2) 00:16:59.549 12.895 - 12.990: 98.9162% ( 3) 00:16:59.549 12.990 - 13.084: 98.9227% ( 6) 00:16:59.549 13.084 - 13.179: 98.9302% ( 7) 00:16:59.549 13.179 - 13.274: 98.9345% ( 4) 00:16:59.549 13.274 - 13.369: 98.9367% ( 2) 00:16:59.549 13.464 - 13.559: 98.9399% ( 3) 00:16:59.549 13.653 - 13.748: 98.9421% ( 2) 00:16:59.549 15.170 - 15.265: 98.9432% ( 1) 00:16:59.549 15.360 - 15.455: 
98.9453% ( 2) 00:16:59.549 15.455 - 15.550: 98.9475% ( 2) 00:16:59.549 15.550 - 15.644: 98.9550% ( 7) 00:16:59.549 15.644 - 15.739: 98.9873% ( 30) 00:16:59.549 15.739 - 15.834: 99.0120% ( 23) 00:16:59.549 15.834 - 15.929: 99.0411% ( 27) 00:16:59.549 15.929 - 16.024: 99.0831% ( 39) 00:16:59.549 16.024 - 16.119: 99.1347% ( 48) 00:16:59.550 16.119 - 16.213: 99.1961% ( 57) 00:16:59.550 16.213 - 16.308: 99.3026% ( 99) 00:16:59.550 16.308 - 16.403: 99.3661% ( 59) 00:16:59.550 16.403 - 16.498: 99.4285% ( 58) 00:16:59.550 16.498 - 16.593: 99.4974% ( 64) 00:16:59.550 16.593 - 16.687: 99.5501% ( 49) 00:16:59.550 16.687 - 16.782: 99.6136% ( 59) 00:16:59.550 16.782 - 16.877: 99.7159% ( 95) 00:16:59.550 16.877 - 16.972: 99.7783% ( 58) 00:16:59.550 16.972 - 17.067: 99.8472% ( 64) 00:16:59.550 17.067 - 17.161: 99.8859% ( 36) 00:16:59.550 17.161 - 17.256: 99.9010% ( 14) 00:16:59.550 17.256 - 17.351: 99.9344% ( 31) 00:16:59.550 17.351 - 17.446: 99.9419% ( 7) 00:16:59.550 17.446 - 17.541: 99.9613% ( 18) 00:16:59.550 17.541 - 17.636: 99.9666% ( 5) 00:16:59.550 17.636 - 17.730: 99.9688% ( 2) 00:16:59.550 17.730 - 17.825: 99.9709% ( 2) 00:16:59.550 17.825 - 17.920: 99.9731% ( 2) 00:16:59.550 17.920 - 18.015: 99.9742% ( 1) 00:16:59.550 18.110 - 18.204: 99.9752% ( 1) 00:16:59.550 18.773 - 18.868: 99.9763% ( 1) 00:16:59.550 19.058 - 19.153: 99.9774% ( 1) 00:16:59.550 19.153 - 19.247: 99.9785% ( 1) 00:16:59.550 19.247 - 19.342: 99.9796% ( 1) 00:16:59.550 19.342 - 19.437: 99.9806% ( 1) 00:16:59.550 19.437 - 19.532: 99.9817% ( 1) 00:16:59.550 19.532 - 19.627: 99.9839% ( 2) 00:16:59.550 19.627 - 19.721: 99.9849% ( 1) 00:16:59.550 19.911 - 20.006: 99.9871% ( 2) 00:16:59.550 21.049 - 21.144: 99.9882% ( 1) 00:16:59.550 21.239 - 21.333: 99.9892% ( 1) 00:16:59.550 22.092 - 22.187: 99.9903% ( 1) 00:16:59.550 23.893 - 23.988: 99.9914% ( 1) 00:16:59.550 24.652 - 24.841: 99.9925% ( 1) 00:16:59.550 25.221 - 25.410: 99.9935% ( 1) 00:16:59.550 25.790 - 25.979: 99.9957% ( 2) 00:16:59.550 28.065 - 28.255: 99.9968% ( 1) 00:16:59.550 28.255 - 28.444: 99.9978% ( 1) 00:16:59.550 93.677 - 94.056: 99.9989% ( 1) 00:16:59.550 1140.812 - 1146.880: 100.0000% ( 1) 00:16:59.550 00:16:59.550 00:16:59.550 real 0m1.271s 00:16:59.550 user 0m1.068s 00:16:59.550 sys 0m0.144s 00:16:59.550 23:58:37 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.550 23:58:37 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:16:59.550 ************************************ 00:16:59.550 END TEST nvme_overhead 00:16:59.550 ************************************ 00:16:59.550 23:58:37 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0 00:16:59.550 23:58:37 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:16:59.550 23:58:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.550 23:58:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:59.550 ************************************ 00:16:59.550 START TEST nvme_arbitration 00:16:59.550 ************************************ 00:16:59.550 23:58:37 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0 00:17:02.838 Initializing NVMe Controllers 00:17:02.838 Attached to 0000:84:00.0 00:17:02.838 Associating INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) with lcore 0 00:17:02.838 Associating INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) with lcore 1 00:17:02.838 Associating INTEL 
SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) with lcore 2 00:17:02.838 Associating INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) with lcore 3 00:17:02.838 /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:02.838 /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:17:02.838 Initialization complete. Launching workers. 00:17:02.838 Starting thread on core 1 with urgent priority queue 00:17:02.838 Starting thread on core 2 with urgent priority queue 00:17:02.838 Starting thread on core 3 with urgent priority queue 00:17:02.838 Starting thread on core 0 with urgent priority queue 00:17:02.838 INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) core 0: 4715.00 IO/s 21.21 secs/100000 ios 00:17:02.838 INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) core 1: 4759.67 IO/s 21.01 secs/100000 ios 00:17:02.838 INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) core 2: 4346.67 IO/s 23.01 secs/100000 ios 00:17:02.838 INTEL SSDPE2KX010T8 (BTLJ724400Z71P0FGN ) core 3: 4258.33 IO/s 23.48 secs/100000 ios 00:17:02.838 ======================================================== 00:17:02.838 00:17:02.838 00:17:02.838 real 0m3.394s 00:17:02.838 user 0m9.213s 00:17:02.838 sys 0m0.187s 00:17:02.838 23:58:41 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.838 23:58:41 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:17:02.838 ************************************ 00:17:02.838 END TEST nvme_arbitration 00:17:02.838 ************************************ 00:17:02.838 23:58:41 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 00:17:02.838 23:58:41 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:02.838 23:58:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.838 23:58:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.096 ************************************ 00:17:03.096 START TEST nvme_single_aen 00:17:03.096 ************************************ 00:17:03.096 23:58:41 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 00:17:03.096 [2024-12-09 23:58:41.590757] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 505343) is not found. Dropping the request. 00:17:03.097 [2024-12-09 23:58:41.590841] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 505343) is not found. Dropping the request. 00:17:03.097 [2024-12-09 23:58:41.590861] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 505343) is not found. Dropping the request. 00:17:03.097 [2024-12-09 23:58:41.590876] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 505343) is not found. Dropping the request. 00:17:06.382 Asynchronous Event Request test 00:17:06.382 Attached to 0000:84:00.0 00:17:06.382 Reset controller to setup AER completions for this process 00:17:06.382 Registering asynchronous event callbacks... 
00:17:06.382 Getting orig temperature thresholds of all controllers 00:17:06.382 0000:84:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:06.382 Setting all controllers temperature threshold low to trigger AER 00:17:06.382 Waiting for all controllers temperature threshold to be set lower 00:17:06.382 Waiting for all controllers to trigger AER and reset threshold 00:17:06.382 0000:84:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:06.382 aer_cb - Resetting Temp Threshold for device: 0000:84:00.0 00:17:06.382 0000:84:00.0: Current Temperature: 313 Kelvin (40 Celsius) 00:17:06.382 Cleaning up... 00:17:06.382 00:17:06.382 real 0m3.124s 00:17:06.382 user 0m2.608s 00:17:06.382 sys 0m0.456s 00:17:06.382 23:58:44 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.382 23:58:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:17:06.382 ************************************ 00:17:06.382 END TEST nvme_single_aen 00:17:06.382 ************************************ 00:17:06.382 23:58:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:17:06.382 23:58:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:06.382 23:58:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.382 23:58:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.382 ************************************ 00:17:06.382 START TEST nvme_doorbell_aers 00:17:06.382 ************************************ 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:84:00.0 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:17:06.382 23:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:84:00.0' 00:17:06.640 [2024-12-09 23:58:44.988696] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 508406) is not found. Dropping the request. 00:17:16.619 Executing: test_write_invalid_db 00:17:16.619 Waiting for AER completion... 
00:17:16.619 Failure: test_write_invalid_db 00:17:16.619 00:17:16.619 Executing: test_invalid_db_write_overflow_sq 00:17:16.619 Waiting for AER completion... 00:17:16.619 Failure: test_invalid_db_write_overflow_sq 00:17:16.619 00:17:16.619 Executing: test_invalid_db_write_overflow_cq 00:17:16.619 Waiting for AER completion... 00:17:16.619 Failure: test_invalid_db_write_overflow_cq 00:17:16.619 00:17:16.619 00:17:16.619 real 0m10.099s 00:17:16.619 user 0m7.549s 00:17:16.619 sys 0m2.446s 00:17:16.619 23:58:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.619 23:58:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:17:16.619 ************************************ 00:17:16.619 END TEST nvme_doorbell_aers 00:17:16.619 ************************************ 00:17:16.619 23:58:54 nvme -- nvme/nvme.sh@97 -- # uname 00:17:16.619 23:58:54 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:17:16.619 23:58:54 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 00:17:16.619 23:58:54 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:17:16.619 23:58:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.619 23:58:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.619 ************************************ 00:17:16.619 START TEST nvme_multi_aen 00:17:16.619 ************************************ 00:17:16.619 23:58:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 00:17:16.619 [2024-12-09 23:58:55.081117] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 508406) is not found. Dropping the request. 00:17:16.619 [2024-12-09 23:58:55.081184] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 508406) is not found. Dropping the request. 00:17:16.619 [2024-12-09 23:58:55.081204] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 508406) is not found. Dropping the request. 00:17:16.619 Child process pid: 509955 00:17:21.887 [Child] Asynchronous Event Request test 00:17:21.887 [Child] Attached to 0000:84:00.0 00:17:21.887 [Child] Registering asynchronous event callbacks... 00:17:21.887 [Child] Getting orig temperature thresholds of all controllers 00:17:21.887 [Child] 0000:84:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:21.887 [Child] Waiting for all controllers to trigger AER and reset threshold 00:17:21.887 [Child] 0000:84:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:21.887 [Child] 0000:84:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:21.887 [Child] 0000:84:00.0: Current Temperature: 312 Kelvin (39 Celsius) 00:17:21.887 [Child] Cleaning up... 00:17:21.887 [Child] 0000:84:00.0: Current Temperature: 312 Kelvin (39 Celsius) 00:17:21.887 Asynchronous Event Request test 00:17:21.887 Attached to 0000:84:00.0 00:17:21.887 Reset controller to setup AER completions for this process 00:17:21.887 Registering asynchronous event callbacks... 
00:17:21.887 Getting orig temperature thresholds of all controllers 00:17:21.887 0000:84:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:21.887 Setting all controllers temperature threshold low to trigger AER 00:17:21.887 Waiting for all controllers temperature threshold to be set lower 00:17:21.887 Waiting for all controllers to trigger AER and reset threshold 00:17:21.887 0000:84:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:21.887 aer_cb - Resetting Temp Threshold for device: 0000:84:00.0 00:17:21.887 0000:84:00.0: Current Temperature: 312 Kelvin (39 Celsius) 00:17:21.887 Cleaning up... 00:17:21.887 00:17:21.887 real 0m4.791s 00:17:21.887 user 0m3.916s 00:17:21.887 sys 0m1.720s 00:17:21.887 23:58:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.887 23:58:59 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:17:21.887 ************************************ 00:17:21.887 END TEST nvme_multi_aen 00:17:21.887 ************************************ 00:17:21.887 23:58:59 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000 00:17:21.887 23:58:59 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:21.887 23:58:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.887 23:58:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:21.887 ************************************ 00:17:21.888 START TEST nvme_startup 00:17:21.888 ************************************ 00:17:21.888 23:58:59 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000 00:17:21.888 Initializing NVMe Controllers 00:17:21.888 Attached to 0000:84:00.0 00:17:21.888 Initialization complete. 00:17:21.888 Time used:211050.547 (us). 
00:17:21.888 00:17:21.888 real 0m0.245s 00:17:21.888 user 0m0.060s 00:17:21.888 sys 0m0.121s 00:17:21.888 23:58:59 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.888 23:58:59 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:17:21.888 ************************************ 00:17:21.888 END TEST nvme_startup 00:17:21.888 ************************************ 00:17:21.888 23:58:59 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:17:21.888 23:58:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.888 23:58:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.888 23:58:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:21.888 ************************************ 00:17:21.888 START TEST nvme_multi_secondary 00:17:21.888 ************************************ 00:17:21.888 23:58:59 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:17:21.888 23:58:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=510536 00:17:21.888 23:58:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:17:21.888 23:58:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=510537 00:17:21.888 23:58:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:17:21.888 23:58:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:17:25.179 Initializing NVMe Controllers 00:17:25.179 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:25.179 Associating PCIE (0000:84:00.0) NSID 1 with lcore 2 00:17:25.179 Initialization complete. Launching workers. 00:17:25.179 ======================================================== 00:17:25.179 Latency(us) 00:17:25.179 Device Information : IOPS MiB/s Average min max 00:17:25.179 PCIE (0000:84:00.0) NSID 1 from core 2: 37937.77 148.19 421.38 31.72 7316.50 00:17:25.179 ======================================================== 00:17:25.179 Total : 37937.77 148.19 421.38 31.72 7316.50 00:17:25.179 00:17:25.179 23:59:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 510536 00:17:25.179 Initializing NVMe Controllers 00:17:25.179 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:25.179 Associating PCIE (0000:84:00.0) NSID 1 with lcore 1 00:17:25.179 Initialization complete. Launching workers. 00:17:25.179 ======================================================== 00:17:25.179 Latency(us) 00:17:25.179 Device Information : IOPS MiB/s Average min max 00:17:25.179 PCIE (0000:84:00.0) NSID 1 from core 1: 82182.53 321.03 194.43 47.42 3134.77 00:17:25.179 ======================================================== 00:17:25.179 Total : 82182.53 321.03 194.43 47.42 3134.77 00:17:25.179 00:17:27.085 Initializing NVMe Controllers 00:17:27.085 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:27.085 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:17:27.085 Initialization complete. Launching workers. 
00:17:27.085 ======================================================== 00:17:27.085 Latency(us) 00:17:27.085 Device Information : IOPS MiB/s Average min max 00:17:27.085 PCIE (0000:84:00.0) NSID 1 from core 0: 85104.98 332.44 187.74 24.46 3018.00 00:17:27.085 ======================================================== 00:17:27.085 Total : 85104.98 332.44 187.74 24.46 3018.00 00:17:27.085 00:17:27.085 23:59:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 510537 00:17:27.086 23:59:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=511192 00:17:27.086 23:59:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:17:27.086 23:59:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=511193 00:17:27.086 23:59:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:17:27.086 23:59:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:17:30.372 Initializing NVMe Controllers 00:17:30.372 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:30.372 Associating PCIE (0000:84:00.0) NSID 1 with lcore 1 00:17:30.372 Initialization complete. Launching workers. 00:17:30.372 ======================================================== 00:17:30.372 Latency(us) 00:17:30.372 Device Information : IOPS MiB/s Average min max 00:17:30.372 PCIE (0000:84:00.0) NSID 1 from core 1: 83078.34 324.52 192.34 32.55 3693.97 00:17:30.372 ======================================================== 00:17:30.372 Total : 83078.34 324.52 192.34 32.55 3693.97 00:17:30.372 00:17:30.630 Initializing NVMe Controllers 00:17:30.630 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:30.630 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:17:30.630 Initialization complete. Launching workers. 00:17:30.630 ======================================================== 00:17:30.630 Latency(us) 00:17:30.630 Device Information : IOPS MiB/s Average min max 00:17:30.630 PCIE (0000:84:00.0) NSID 1 from core 0: 83051.92 324.42 192.40 24.90 4115.18 00:17:30.630 ======================================================== 00:17:30.630 Total : 83051.92 324.42 192.40 24.90 4115.18 00:17:30.630 00:17:32.662 Initializing NVMe Controllers 00:17:32.662 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:32.662 Associating PCIE (0000:84:00.0) NSID 1 with lcore 2 00:17:32.662 Initialization complete. Launching workers. 
00:17:32.662 ======================================================== 00:17:32.662 Latency(us) 00:17:32.662 Device Information : IOPS MiB/s Average min max 00:17:32.662 PCIE (0000:84:00.0) NSID 1 from core 2: 41116.80 160.61 388.59 37.85 6280.78 00:17:32.662 ======================================================== 00:17:32.662 Total : 41116.80 160.61 388.59 37.85 6280.78 00:17:32.662 00:17:32.662 23:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 511192 00:17:32.662 23:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 511193 00:17:32.662 00:17:32.662 real 0m10.803s 00:17:32.662 user 0m18.441s 00:17:32.662 sys 0m0.901s 00:17:32.662 23:59:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.662 23:59:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:17:32.662 ************************************ 00:17:32.662 END TEST nvme_multi_secondary 00:17:32.662 ************************************ 00:17:32.662 23:59:10 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:17:32.662 23:59:10 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:17:32.662 23:59:10 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/504681 ]] 00:17:32.662 23:59:10 nvme -- common/autotest_common.sh@1094 -- # kill 504681 00:17:32.662 23:59:10 nvme -- common/autotest_common.sh@1095 -- # wait 504681 00:17:32.662 [2024-12-09 23:59:10.719621] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 509954) is not found. Dropping the request. 00:17:32.662 [2024-12-09 23:59:10.719673] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 509954) is not found. Dropping the request. 00:17:32.662 [2024-12-09 23:59:10.719692] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 509954) is not found. Dropping the request. 00:17:32.662 [2024-12-09 23:59:10.719708] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 509954) is not found. Dropping the request. 00:17:33.304 [2024-12-09 23:59:11.698088] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:17:33.622 23:59:12 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:17:33.622 23:59:12 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:17:33.622 23:59:12 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:17:33.622 23:59:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:33.622 23:59:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.622 23:59:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:33.959 ************************************ 00:17:33.959 START TEST bdev_nvme_reset_stuck_adm_cmd 00:17:33.959 ************************************ 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:17:33.959 * Looking for test storage... 
00:17:33.959 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.959 --rc genhtml_branch_coverage=1 00:17:33.959 --rc genhtml_function_coverage=1 00:17:33.959 --rc genhtml_legend=1 00:17:33.959 --rc geninfo_all_blocks=1 00:17:33.959 --rc geninfo_unexecuted_blocks=1 00:17:33.959 00:17:33.959 ' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.959 --rc genhtml_branch_coverage=1 00:17:33.959 --rc genhtml_function_coverage=1 00:17:33.959 --rc genhtml_legend=1 00:17:33.959 --rc geninfo_all_blocks=1 00:17:33.959 --rc geninfo_unexecuted_blocks=1 00:17:33.959 00:17:33.959 ' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.959 --rc genhtml_branch_coverage=1 00:17:33.959 --rc genhtml_function_coverage=1 00:17:33.959 --rc genhtml_legend=1 00:17:33.959 --rc geninfo_all_blocks=1 00:17:33.959 --rc geninfo_unexecuted_blocks=1 00:17:33.959 00:17:33.959 ' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.959 --rc genhtml_branch_coverage=1 00:17:33.959 --rc genhtml_function_coverage=1 00:17:33.959 --rc genhtml_legend=1 00:17:33.959 --rc geninfo_all_blocks=1 00:17:33.959 --rc geninfo_unexecuted_blocks=1 00:17:33.959 00:17:33.959 ' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:17:33.959 
23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:84:00.0 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:84:00.0 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:84:00.0 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:84:00.0 ']' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=512089 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0xF 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 512089 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 512089 ']' 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.959 23:59:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:33.959 [2024-12-09 23:59:12.438839] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:17:33.959 [2024-12-09 23:59:12.438940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512089 ] 00:17:34.236 [2024-12-09 23:59:12.555204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.236 [2024-12-09 23:59:12.661713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.236 [2024-12-09 23:59:12.661802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.236 [2024-12-09 23:59:12.661866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.236 [2024-12-09 23:59:12.661869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.495 [2024-12-09 23:59:12.900852] 'OCF_Core' volume operations registered 00:17:34.495 [2024-12-09 23:59:12.900905] 'OCF_Cache' volume operations registered 00:17:34.495 [2024-12-09 23:59:12.905350] 'OCF Composite' volume operations registered 00:17:34.495 [2024-12-09 23:59:12.909817] 'SPDK_block_device' volume operations registered 00:17:34.753 23:59:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.753 23:59:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:17:34.753 23:59:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:84:00.0 00:17:34.753 23:59:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.753 23:59:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:38.040 nvme0n1 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_59MRe.txt 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:38.040 true 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733785155 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=512498 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:38.040 23:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:17:39.417 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:39.417 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.417 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:39.417 [2024-12-09 23:59:17.925562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:84:00.0, 0] resetting controller 00:17:39.417 [2024-12-09 23:59:17.925733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:39.417 [2024-12-09 23:59:17.925778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:39.417 [2024-12-09 23:59:17.925796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.417 [2024-12-09 23:59:17.926808] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:84:00.0, 0] Resetting controller successful. 00:17:39.417 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.417 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 512498 00:17:39.417 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 512498 00:17:39.417 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 512498 00:17:39.676 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:17:39.676 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:17:39.676 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.676 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.676 23:59:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_59MRe.txt 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d 
<(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:17:41.053 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_59MRe.txt 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 512089 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 512089 ']' 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 512089 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 512089 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 512089' 00:17:41.054 killing process with pid 512089 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 512089 00:17:41.054 23:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 512089 00:17:41.622 23:59:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc 
|| err_injection_sct != nvme_status_sct )) 00:17:41.622 23:59:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:17:41.622 00:17:41.622 real 0m7.883s 00:17:41.622 user 0m28.844s 00:17:41.622 sys 0m1.128s 00:17:41.622 23:59:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.622 23:59:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:41.622 ************************************ 00:17:41.622 END TEST bdev_nvme_reset_stuck_adm_cmd 00:17:41.622 ************************************ 00:17:41.622 23:59:20 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:17:41.622 23:59:20 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:17:41.622 23:59:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:41.622 23:59:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.622 23:59:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:41.622 ************************************ 00:17:41.622 START TEST nvme_fio 00:17:41.622 ************************************ 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:41.622 23:59:20 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:84:00.0 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:84:00.0') 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' 00:17:41.622 23:59:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:17:46.896 23:59:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' 00:17:46.896 23:59:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:17:50.187 23:59:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:17:50.187 23:59:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.84.00.0' --bs=4096 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.84.00.0' --bs=4096 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:50.187 23:59:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.84.00.0' --bs=4096 00:17:50.446 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:50.446 fio-3.35 00:17:50.446 Starting 1 thread 00:17:57.014 00:17:57.014 test: (groupid=0, jobs=1): err= 0: pid=514439: Mon Dec 9 23:59:35 2024 00:17:57.014 read: IOPS=53.0k, BW=207MiB/s (217MB/s)(414MiB/2001msec) 00:17:57.014 slat (nsec): min=3595, max=34987, avg=4737.32, stdev=1608.09 00:17:57.014 clat (usec): min=208, max=1800, avg=1198.59, stdev=29.17 00:17:57.014 lat (usec): min=213, max=1804, avg=1203.33, stdev=29.21 00:17:57.014 clat percentiles (usec): 00:17:57.014 | 1.00th=[ 1139], 5.00th=[ 1156], 10.00th=[ 1172], 20.00th=[ 1188], 00:17:57.014 | 30.00th=[ 1188], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1205], 00:17:57.014 | 70.00th=[ 1205], 80.00th=[ 1221], 90.00th=[ 1221], 95.00th=[ 1237], 00:17:57.014 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1385], 00:17:57.014 | 99.99th=[ 1483] 00:17:57.014 bw ( KiB/s): min=209464, max=213552, per=100.00%, avg=212090.67, stdev=2279.57, samples=3 00:17:57.014 iops : min=52366, max=53388, avg=53022.67, stdev=569.89, samples=3 00:17:57.014 write: IOPS=52.9k, BW=207MiB/s (217MB/s)(413MiB/2001msec); 0 zone resets 00:17:57.014 slat (nsec): min=3709, 
max=31211, avg=4963.12, stdev=1642.86 00:17:57.014 clat (usec): min=190, max=1455, avg=1198.26, stdev=25.49 00:17:57.014 lat (usec): min=195, max=1460, avg=1203.23, stdev=25.54 00:17:57.014 clat percentiles (usec): 00:17:57.014 | 1.00th=[ 1139], 5.00th=[ 1156], 10.00th=[ 1172], 20.00th=[ 1188], 00:17:57.014 | 30.00th=[ 1188], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1205], 00:17:57.014 | 70.00th=[ 1205], 80.00th=[ 1221], 90.00th=[ 1221], 95.00th=[ 1237], 00:17:57.014 | 99.00th=[ 1254], 99.50th=[ 1254], 99.90th=[ 1287], 99.95th=[ 1319], 00:17:57.014 | 99.99th=[ 1401] 00:17:57.014 bw ( KiB/s): min=209528, max=212032, per=99.77%, avg=211045.33, stdev=1333.68, samples=3 00:17:57.014 iops : min=52382, max=53008, avg=52761.33, stdev=333.42, samples=3 00:17:57.014 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:17:57.014 lat (msec) : 2=99.93% 00:17:57.014 cpu : usr=99.30%, sys=0.05%, ctx=3, majf=0, minf=6 00:17:57.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:57.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.014 issued rwts: total=105993,105818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.014 00:17:57.014 Run status group 0 (all jobs): 00:17:57.014 READ: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=414MiB (434MB), run=2001-2001msec 00:17:57.014 WRITE: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=413MiB (433MB), run=2001-2001msec 00:17:57.014 23:59:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:17:57.014 23:59:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:17:57.014 00:17:57.014 real 0m15.102s 00:17:57.014 user 0m11.788s 00:17:57.014 sys 0m1.577s 00:17:57.014 23:59:35 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.014 23:59:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:17:57.014 ************************************ 00:17:57.014 END TEST nvme_fio 00:17:57.014 ************************************ 00:17:57.014 00:17:57.014 real 1m26.904s 00:17:57.014 user 3m34.293s 00:17:57.014 sys 0m14.271s 00:17:57.014 23:59:35 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.014 23:59:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:57.014 ************************************ 00:17:57.014 END TEST nvme 00:17:57.014 ************************************ 00:17:57.014 23:59:35 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:17:57.014 23:59:35 -- spdk/autotest.sh@217 -- # run_test nvme_scc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh 00:17:57.014 23:59:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.014 23:59:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.014 23:59:35 -- common/autotest_common.sh@10 -- # set +x 00:17:57.014 ************************************ 00:17:57.014 START TEST nvme_scc 00:17:57.014 ************************************ 00:17:57.014 23:59:35 nvme_scc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh 00:17:57.014 * Looking for test storage... 
00:17:57.014 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:17:57.014 23:59:35 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:57.014 23:59:35 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:57.014 23:59:35 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:57.014 23:59:35 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@345 -- # : 1 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.014 23:59:35 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@368 -- # return 0 00:17:57.015 23:59:35 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.015 23:59:35 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:57.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.015 --rc genhtml_branch_coverage=1 00:17:57.015 --rc genhtml_function_coverage=1 00:17:57.015 --rc genhtml_legend=1 00:17:57.015 --rc geninfo_all_blocks=1 00:17:57.015 --rc geninfo_unexecuted_blocks=1 00:17:57.015 00:17:57.015 ' 00:17:57.015 23:59:35 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:57.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.015 --rc genhtml_branch_coverage=1 00:17:57.015 --rc genhtml_function_coverage=1 00:17:57.015 --rc genhtml_legend=1 00:17:57.015 --rc geninfo_all_blocks=1 00:17:57.015 --rc geninfo_unexecuted_blocks=1 00:17:57.015 00:17:57.015 ' 00:17:57.015 23:59:35 nvme_scc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:17:57.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.015 --rc genhtml_branch_coverage=1 00:17:57.015 --rc genhtml_function_coverage=1 00:17:57.015 --rc genhtml_legend=1 00:17:57.015 --rc geninfo_all_blocks=1 00:17:57.015 --rc geninfo_unexecuted_blocks=1 00:17:57.015 00:17:57.015 ' 00:17:57.015 23:59:35 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:57.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.015 --rc genhtml_branch_coverage=1 00:17:57.015 --rc genhtml_function_coverage=1 00:17:57.015 --rc genhtml_legend=1 00:17:57.015 --rc geninfo_all_blocks=1 00:17:57.015 --rc geninfo_unexecuted_blocks=1 00:17:57.015 00:17:57.015 ' 00:17:57.015 23:59:35 nvme_scc -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.015 23:59:35 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.015 23:59:35 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.015 23:59:35 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.015 23:59:35 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.015 23:59:35 nvme_scc -- paths/export.sh@5 -- # export PATH 00:17:57.015 23:59:35 nvme_scc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:17:57.015 23:59:35 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:17:57.015 23:59:35 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:17:57.015 23:59:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:17:57.015 23:59:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:17:57.015 23:59:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ ............................... == QEMU ]] 00:17:57.015 23:59:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:17:57.015 00:17:57.015 real 0m0.181s 00:17:57.015 user 0m0.121s 00:17:57.015 sys 0m0.069s 00:17:57.015 23:59:35 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.015 23:59:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:17:57.015 ************************************ 00:17:57.015 END TEST nvme_scc 00:17:57.015 ************************************ 00:17:57.015 23:59:35 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:17:57.015 23:59:35 -- spdk/autotest.sh@222 -- # [[ 1 -eq 1 ]] 00:17:57.015 23:59:35 -- spdk/autotest.sh@223 -- # run_test nvme_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh 00:17:57.015 23:59:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.015 23:59:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.015 23:59:35 -- common/autotest_common.sh@10 -- # set +x 00:17:57.015 ************************************ 00:17:57.015 START TEST nvme_cuse 00:17:57.015 ************************************ 00:17:57.015 23:59:35 nvme_cuse -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh 00:17:57.274 * Looking for test storage... 
00:17:57.274 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1711 -- # lcov --version 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@344 -- # case "$op" in 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@345 -- # : 1 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@365 -- # decimal 1 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@353 -- # local d=1 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@355 -- # echo 1 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@366 -- # decimal 2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@353 -- # local d=2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@355 -- # echo 2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.274 23:59:35 nvme_cuse -- scripts/common.sh@368 -- # return 0 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:57.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.274 --rc genhtml_branch_coverage=1 00:17:57.274 --rc genhtml_function_coverage=1 00:17:57.274 --rc genhtml_legend=1 00:17:57.274 --rc geninfo_all_blocks=1 00:17:57.274 --rc geninfo_unexecuted_blocks=1 00:17:57.274 00:17:57.274 ' 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:57.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.274 --rc genhtml_branch_coverage=1 00:17:57.274 --rc genhtml_function_coverage=1 00:17:57.274 --rc genhtml_legend=1 00:17:57.274 --rc geninfo_all_blocks=1 00:17:57.274 --rc geninfo_unexecuted_blocks=1 00:17:57.274 00:17:57.274 ' 00:17:57.274 23:59:35 nvme_cuse -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:57.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.274 --rc genhtml_branch_coverage=1 00:17:57.274 --rc genhtml_function_coverage=1 00:17:57.274 --rc genhtml_legend=1 00:17:57.274 --rc geninfo_all_blocks=1 00:17:57.274 --rc geninfo_unexecuted_blocks=1 00:17:57.274 00:17:57.274 ' 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:57.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.274 --rc genhtml_branch_coverage=1 00:17:57.274 --rc genhtml_function_coverage=1 00:17:57.274 --rc genhtml_legend=1 00:17:57.274 --rc geninfo_all_blocks=1 00:17:57.274 --rc geninfo_unexecuted_blocks=1 00:17:57.274 00:17:57.274 ' 00:17:57.274 23:59:35 nvme_cuse -- cuse/nvme_cuse.sh@11 -- # uname 00:17:57.274 23:59:35 nvme_cuse -- cuse/nvme_cuse.sh@11 -- # [[ Linux != \L\i\n\u\x ]] 00:17:57.274 23:59:35 nvme_cuse -- cuse/nvme_cuse.sh@16 -- # modprobe cuse 00:17:57.274 23:59:35 nvme_cuse -- cuse/nvme_cuse.sh@17 -- # run_test nvme_cuse_app /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.274 23:59:35 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:17:57.274 ************************************ 00:17:57.274 START TEST nvme_cuse_app 00:17:57.274 ************************************ 00:17:57.274 23:59:35 nvme_cuse.nvme_cuse_app -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse 00:17:57.274 00:17:57.274 00:17:57.274 CUnit - A unit testing framework for C - Version 2.1-3 00:17:57.274 http://cunit.sourceforge.net/ 00:17:57.274 00:17:57.274 00:17:57.274 Suite: nvme_cuse 00:17:58.215 Test: test_cuse_update ...passed 00:17:58.215 00:17:58.215 Run Summary: Type Total Ran Passed Failed Inactive 00:17:58.215 suites 1 1 n/a 0 0 00:17:58.215 tests 1 1 1 0 0 00:17:58.215 asserts 28 28 28 0 n/a 00:17:58.215 00:17:58.215 Elapsed time = 0.053 seconds 00:17:58.215 00:17:58.215 real 0m1.016s 00:17:58.215 user 0m0.009s 00:17:58.215 sys 0m0.052s 00:17:58.215 23:59:36 nvme_cuse.nvme_cuse_app -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.215 23:59:36 nvme_cuse.nvme_cuse_app -- common/autotest_common.sh@10 -- # set +x 00:17:58.215 ************************************ 00:17:58.215 END TEST nvme_cuse_app 00:17:58.215 ************************************ 00:17:58.215 23:59:36 nvme_cuse -- cuse/nvme_cuse.sh@18 -- # run_test nvme_cuse_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh 00:17:58.215 23:59:36 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:58.215 23:59:36 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.215 23:59:36 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:17:58.474 ************************************ 00:17:58.474 START TEST nvme_cuse_rpc 00:17:58.474 ************************************ 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh 00:17:58.474 * Looking for test storage... 
00:17:58.474 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.474 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@345 -- # : 1 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@353 -- # local d=1 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@355 -- # echo 1 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@353 -- # local d=2 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@355 -- # echo 2 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- scripts/common.sh@368 -- # return 0 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.475 --rc genhtml_branch_coverage=1 00:17:58.475 --rc genhtml_function_coverage=1 00:17:58.475 --rc genhtml_legend=1 00:17:58.475 --rc geninfo_all_blocks=1 00:17:58.475 --rc geninfo_unexecuted_blocks=1 00:17:58.475 00:17:58.475 ' 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.475 --rc genhtml_branch_coverage=1 00:17:58.475 --rc genhtml_function_coverage=1 00:17:58.475 --rc genhtml_legend=1 00:17:58.475 --rc geninfo_all_blocks=1 00:17:58.475 --rc geninfo_unexecuted_blocks=1 00:17:58.475 00:17:58.475 ' 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.475 --rc genhtml_branch_coverage=1 00:17:58.475 --rc genhtml_function_coverage=1 00:17:58.475 --rc genhtml_legend=1 00:17:58.475 --rc geninfo_all_blocks=1 00:17:58.475 --rc geninfo_unexecuted_blocks=1 00:17:58.475 00:17:58.475 ' 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.475 --rc genhtml_branch_coverage=1 00:17:58.475 --rc genhtml_function_coverage=1 00:17:58.475 --rc genhtml_legend=1 00:17:58.475 --rc geninfo_all_blocks=1 00:17:58.475 --rc geninfo_unexecuted_blocks=1 00:17:58.475 00:17:58.475 ' 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@13 -- # get_first_nvme_bdf 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:17:58.475 23:59:36 
nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:17:58.475 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:84:00.0 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1512 -- # echo 0000:84:00.0 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@13 -- # bdf=0000:84:00.0 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@14 -- # ctrlr_base=/dev/spdk/nvme 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@17 -- # spdk_tgt_pid=515182 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@18 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@20 -- # waitforlisten 515182 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@835 -- # '[' -z 515182 ']' 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.734 23:59:36 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.734 [2024-12-09 23:59:37.047928] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 
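Once spdk_tgt is up, the cuse_rpc test drives everything over JSON-RPC. A hedged sketch of the attach-and-register sequence the trace performs next (the controller name, BDF, and rpc.py path are taken from the log; the readiness loop and its bound are illustrative, not the script verbatim):

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    # Attach the PCIe controller as bdev "Nvme0", then expose it through CUSE.
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0
    $rpc bdev_nvme_cuse_register -n Nvme0
    # Wait for the CUSE character devices to show up under /dev/spdk.
    for ((i = 0; i < 50; i++)); do
        [[ -c /dev/spdk/nvme0 && -c /dev/spdk/nvme0n1 ]] && break
        sleep 0.1
    done
    [[ -c /dev/spdk/nvme0 ]] || { echo "CUSE device never appeared" >&2; exit 1; }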
00:17:58.734 [2024-12-09 23:59:37.048006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515182 ] 00:17:58.734 [2024-12-09 23:59:37.145014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:58.734 [2024-12-09 23:59:37.240818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.734 [2024-12-09 23:59:37.240823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.993 [2024-12-09 23:59:37.482627] 'OCF_Core' volume operations registered 00:17:58.993 [2024-12-09 23:59:37.482677] 'OCF_Cache' volume operations registered 00:17:58.993 [2024-12-09 23:59:37.487158] 'OCF Composite' volume operations registered 00:17:58.993 [2024-12-09 23:59:37.491653] 'SPDK_block_device' volume operations registered 00:17:59.251 23:59:37 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.251 23:59:37 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:59.251 23:59:37 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:18:02.555 Nvme0n1 00:18:02.555 23:59:40 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:18:02.555 [2024-12-09 23:59:41.063972] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:18:02.555 [2024-12-09 23:59:41.064012] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:18:02.555 [2024-12-09 23:59:41.064184] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:18:02.555 [2024-12-09 23:59:41.064226] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:18:02.813 23:59:41 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@25 -- # sleep 5 00:18:08.080 23:59:46 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@27 -- # '[' '!' 
-c /dev/spdk/nvme0 ']' 00:18:08.080 23:59:46 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:18:08.080 [ 00:18:08.080 { 00:18:08.080 "name": "Nvme0n1", 00:18:08.080 "aliases": [ 00:18:08.080 "32dc427f-697a-41f6-88a0-705ff60e4a31" 00:18:08.080 ], 00:18:08.080 "product_name": "NVMe disk", 00:18:08.080 "block_size": 512, 00:18:08.080 "num_blocks": 1953525168, 00:18:08.080 "uuid": "32dc427f-697a-41f6-88a0-705ff60e4a31", 00:18:08.080 "numa_id": 1, 00:18:08.080 "assigned_rate_limits": { 00:18:08.080 "rw_ios_per_sec": 0, 00:18:08.080 "rw_mbytes_per_sec": 0, 00:18:08.080 "r_mbytes_per_sec": 0, 00:18:08.080 "w_mbytes_per_sec": 0 00:18:08.080 }, 00:18:08.080 "claimed": false, 00:18:08.080 "zoned": false, 00:18:08.080 "supported_io_types": { 00:18:08.080 "read": true, 00:18:08.080 "write": true, 00:18:08.080 "unmap": true, 00:18:08.080 "flush": true, 00:18:08.080 "reset": true, 00:18:08.080 "nvme_admin": true, 00:18:08.080 "nvme_io": true, 00:18:08.080 "nvme_io_md": false, 00:18:08.080 "write_zeroes": true, 00:18:08.080 "zcopy": false, 00:18:08.080 "get_zone_info": false, 00:18:08.080 "zone_management": false, 00:18:08.080 "zone_append": false, 00:18:08.080 "compare": false, 00:18:08.080 "compare_and_write": false, 00:18:08.080 "abort": true, 00:18:08.080 "seek_hole": false, 00:18:08.080 "seek_data": false, 00:18:08.080 "copy": false, 00:18:08.080 "nvme_iov_md": false 00:18:08.080 }, 00:18:08.080 "driver_specific": { 00:18:08.080 "nvme": [ 00:18:08.080 { 00:18:08.080 "pci_address": "0000:84:00.0", 00:18:08.080 "trid": { 00:18:08.080 "trtype": "PCIe", 00:18:08.080 "traddr": "0000:84:00.0" 00:18:08.080 }, 00:18:08.080 "cuse_device": "spdk/nvme0n1", 00:18:08.080 "ctrlr_data": { 00:18:08.080 "cntlid": 0, 00:18:08.080 "vendor_id": "0x8086", 00:18:08.080 "model_number": "INTEL SSDPE2KX010T8", 00:18:08.080 "serial_number": "BTLJ724400Z71P0FGN", 00:18:08.080 "firmware_revision": "VDV10184", 00:18:08.080 "oacs": { 00:18:08.080 "security": 0, 00:18:08.080 "format": 1, 00:18:08.080 "firmware": 1, 00:18:08.080 "ns_manage": 1 00:18:08.080 }, 00:18:08.080 "multi_ctrlr": false, 00:18:08.080 "ana_reporting": false 00:18:08.080 }, 00:18:08.080 "vs": { 00:18:08.080 "nvme_version": "1.2" 00:18:08.080 }, 00:18:08.080 "ns_data": { 00:18:08.080 "id": 1, 00:18:08.080 "can_share": false 00:18:08.080 } 00:18:08.080 } 00:18:08.080 ], 00:18:08.080 "mp_policy": "active_passive" 00:18:08.080 } 00:18:08.080 } 00:18:08.080 ] 00:18:08.080 23:59:46 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:18:08.338 [ 00:18:08.338 { 00:18:08.338 "name": "Nvme0", 00:18:08.338 "ctrlrs": [ 00:18:08.338 { 00:18:08.338 "state": "enabled", 00:18:08.338 "cuse_device": "spdk/nvme0", 00:18:08.338 "trid": { 00:18:08.338 "trtype": "PCIe", 00:18:08.338 "traddr": "0000:84:00.0" 00:18:08.338 }, 00:18:08.338 "cntlid": 0, 00:18:08.338 "host": { 00:18:08.338 "nqn": "nqn.2014-08.org.nvmexpress:uuid:58609471-33da-44c1-b580-9ff947956558", 00:18:08.338 "addr": "", 00:18:08.338 "svcid": "" 00:18:08.338 }, 00:18:08.338 "numa_id": 1 00:18:08.338 } 00:18:08.338 ] 00:18:08.338 } 00:18:08.338 ] 00:18:08.338 23:59:46 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0 00:18:08.905 23:59:47 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@35 -- # sleep 1 00:18:09.840 [2024-12-09 
23:59:48.075745] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:18:10.099 23:59:48 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@36 -- # '[' -c /dev/spdk/nvme0 ']' 00:18:10.099 23:59:48 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0 00:18:10.357 [2024-12-09 23:59:48.735100] nvme_cuse.c:1471:spdk_nvme_cuse_unregister: *ERROR*: Cannot find associated CUSE device 00:18:10.357 request: 00:18:10.357 { 00:18:10.357 "name": "Nvme0", 00:18:10.357 "method": "bdev_nvme_cuse_unregister", 00:18:10.357 "req_id": 1 00:18:10.357 } 00:18:10.357 Got JSON-RPC error response 00:18:10.357 response: 00:18:10.357 { 00:18:10.357 "code": -19, 00:18:10.357 "message": "No such device" 00:18:10.357 } 00:18:10.357 23:59:48 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:18:10.615 [2024-12-09 23:59:49.114570] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:18:10.615 [2024-12-09 23:59:49.114602] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:18:10.615 [2024-12-09 23:59:49.114720] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:18:10.615 [2024-12-09 23:59:49.114783] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:18:10.615 23:59:49 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@44 -- # sleep 1 00:18:11.994 23:59:50 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@46 -- # '[' '!' -c /dev/spdk/nvme0 ']' 00:18:11.994 23:59:50 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:18:11.994 [2024-12-09 23:59:50.496795] bdev_nvme_cuse_rpc.c: 57:rpc_nvme_cuse_register: *ERROR*: Failed to register CUSE devices: File exists 00:18:11.994 request: 00:18:11.994 { 00:18:11.994 "name": "Nvme0", 00:18:11.994 "method": "bdev_nvme_cuse_register", 00:18:11.994 "req_id": 1 00:18:11.994 } 00:18:11.994 Got JSON-RPC error response 00:18:11.994 response: 00:18:11.994 { 00:18:11.994 "code": -17, 00:18:11.994 "message": "File exists" 00:18:11.994 } 00:18:12.254 23:59:50 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@52 -- # sleep 1 00:18:13.191 23:59:51 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@54 -- # '[' -c /dev/spdk/nvme1 ']' 00:18:13.191 23:59:51 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:15.101 [2024-12-09 23:59:53.120903] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 
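The "File exists" response above is the expected result of registering the same controller twice. A minimal sketch of that negative check under the same RPC paths (the grep target mirrors the JSON-RPC message in the trace; the temp-file name is illustrative):

    # A second cuse_register on an already-registered controller must fail
    # with EEXIST (-17); the check passes only if the RPC returns non-zero.
    if $rpc bdev_nvme_cuse_register -n Nvme0 2>/tmp/cuse_err.log; then
        echo "expected EEXIST from duplicate cuse_register" >&2; exit 1
    fi
    grep -q 'File exists' /tmp/cuse_err.log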
00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@60 -- # trap - SIGINT SIGTERM EXIT 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- cuse/nvme_cuse_rpc.sh@61 -- # killprocess 515182 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@954 -- # '[' -z 515182 ']' 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@958 -- # kill -0 515182 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@959 -- # uname 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515182 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515182' 00:18:15.101 killing process with pid 515182 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@973 -- # kill 515182 00:18:15.101 23:59:53 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@978 -- # wait 515182 00:18:16.039 00:18:16.039 real 0m17.467s 00:18:16.039 user 0m36.372s 00:18:16.039 sys 0m1.459s 00:18:16.039 23:59:54 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.039 23:59:54 nvme_cuse.nvme_cuse_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.039 ************************************ 00:18:16.039 END TEST nvme_cuse_rpc 00:18:16.039 ************************************ 00:18:16.039 23:59:54 nvme_cuse -- cuse/nvme_cuse.sh@19 -- # run_test nvme_cli_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh 00:18:16.039 23:59:54 nvme_cuse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:16.039 23:59:54 nvme_cuse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.039 23:59:54 nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:16.039 ************************************ 00:18:16.039 START TEST nvme_cli_cuse 00:18:16.039 ************************************ 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh 00:18:16.039 * Looking for test storage... 
00:18:16.039 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1711 -- # lcov --version 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@344 -- # case "$op" in 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@345 -- # : 1 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@365 -- # decimal 1 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@353 -- # local d=1 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@355 -- # echo 1 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@366 -- # decimal 2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@353 -- # local d=2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@355 -- # echo 2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@368 -- # return 0 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.039 --rc genhtml_branch_coverage=1 00:18:16.039 --rc genhtml_function_coverage=1 00:18:16.039 --rc genhtml_legend=1 00:18:16.039 --rc geninfo_all_blocks=1 00:18:16.039 --rc geninfo_unexecuted_blocks=1 00:18:16.039 00:18:16.039 ' 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.039 --rc genhtml_branch_coverage=1 00:18:16.039 --rc genhtml_function_coverage=1 00:18:16.039 --rc genhtml_legend=1 00:18:16.039 --rc geninfo_all_blocks=1 00:18:16.039 --rc geninfo_unexecuted_blocks=1 00:18:16.039 00:18:16.039 ' 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.039 --rc genhtml_branch_coverage=1 00:18:16.039 --rc genhtml_function_coverage=1 00:18:16.039 --rc genhtml_legend=1 00:18:16.039 --rc geninfo_all_blocks=1 00:18:16.039 --rc geninfo_unexecuted_blocks=1 00:18:16.039 00:18:16.039 ' 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.039 --rc genhtml_branch_coverage=1 00:18:16.039 --rc genhtml_function_coverage=1 00:18:16.039 --rc genhtml_legend=1 00:18:16.039 --rc geninfo_all_blocks=1 00:18:16.039 --rc geninfo_unexecuted_blocks=1 00:18:16.039 00:18:16.039 ' 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@7 -- # 
rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- paths/export.sh@5 -- # export PATH 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@10 -- # ctrls=() 00:18:16.039 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@10 -- # declare -A ctrls 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@11 -- # nvmes=() 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@11 -- # declare -A nvmes 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@12 -- # bdfs=() 
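
The trace above walks scripts/common.sh's version helpers: `lt 1.15 2` splits both strings on `IFS=.-:`, treats missing components as zero, and compares component by component, so lcov 1.15 sorts below 2 and the legacy `--rc lcov_branch_coverage=1` option names get selected. A condensed sketch of the same comparison (a hypothetical standalone rewrite, not the exact SPDK source):

    #!/usr/bin/env bash
    # Component-wise version compare, as walked through in the trace above.
    # lt A B returns 0 (true) when version A sorts strictly below version B.
    lt() {
        local -a ver1 ver2
        local v a b
        IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                              # equal is not "less than"
    }

    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"

(The repeated /opt/golangci, /opt/protoc, and /opt/go entries visible in the PATH lines above are paths/export.sh prepending its directories once per source; each re-source grows PATH with another round of duplicates.)
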
00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@12 -- # declare -A bdfs 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@14 -- # nvme_name= 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@10 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@11 -- # mkdir /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@13 -- # KERNEL_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@14 -- # CUSE_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@16 -- # NVME_CMD=/usr/local/src/nvme-cli/nvme 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@17 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:18:16.040 23:59:54 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@19 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:18:17.416 Waiting for block devices as requested 00:18:17.416 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:18:17.416 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:17.416 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:17.675 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:17.675 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:17.675 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:17.675 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:17.934 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:17.934 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:17.934 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:17.934 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:18.192 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:18.192 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:18.192 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:18.454 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:18.454 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:18.454 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@20 -- # scan_nvme_ctrls 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@49 -- # pci=0000:84:00.0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@50 -- # pci_can_use 0000:84:00.0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@18 -- # local i 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@21 -- # [[ =~ 0000:84:00.0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@25 -- # [[ -z '' ]] 
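
At this point setup.sh reset has rebound 0000:84:00.0 from vfio-pci back to the kernel nvme driver (and the ioatdma channels likewise), and scan_nvme_ctrls begins iterating /sys/class/nvme/nvme*, resolving each controller's PCI address and checking it against the PCI_ALLOWED/PCI_BLOCKED lists before interrogating it. A rough sketch of that gate (a hypothetical condensation of nvme/functions.sh and scripts/common.sh, not the literal source):

    #!/usr/bin/env bash
    # Gate each controller on the allow/block lists, then scan it.
    pci_can_use() {
        local pci=$1 i
        for i in ${PCI_BLOCKED:-}; do          # an explicit block always wins
            [[ $i == "$pci" ]] && return 1
        done
        [[ -z ${PCI_ALLOWED:-} ]] && return 0  # empty allow list = allow all
        for i in $PCI_ALLOWED; do
            [[ $i == "$pci" ]] && return 0
        done
        return 1
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(readlink -f "$ctrl/device")      # .../pci.../0000:84:00.0
        pci=${pci##*/}                         # keep the bare BDF
        pci_can_use "$pci" && echo "scanning ${ctrl##*/} at $pci"
    done

In this run both lists are empty, so the `[[ -z '' ]]` seen in the trace is the allow-list check falling through, and nvme0 at 0000:84:00.0 proceeds to the id-ctrl scan below.
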
00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- scripts/common.sh@27 -- # return 0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@18 -- # shift 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[vid]=0x8086 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n BTLJ724400Z71P0FGN ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ724400Z71P0FGN "' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ724400Z71P0FGN ' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n INTEL SSDPE2KX010T8 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX010T8 "' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX010T8 ' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n VDV10184 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rab]=0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 5cd2e4 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 5 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mdts]=5 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x10200 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ver]=0x10200 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x1e8480 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x1e8480"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x1e8480 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x2dc6c0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0x2dc6c0"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0x2dc6c0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x200 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[oaes]=0x200 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ctratt]=0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.454 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cntrltype]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # 
nvme0[crdt3]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mec]=1 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[oacs]=0xe 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x14"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[frmw]=0x14 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@23 -- # nvme0[lpa]=0xe 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 63 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[elpe]=63 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 353 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cctemp]=353 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:18:18.455 
23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1,000,204,886,016 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="1,000,204,886,016"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=1,000,204,886,016 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.455 23:59:56 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:18:18.455 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 
23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nn]=128 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x6 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[oncs]=0x6 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fna]=0x4 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[vwc]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:18:18.456 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ocfs]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[sgls]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[subnqn]= 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- 
# IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n mp:12.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:12.00W operational enlat:0 exlat:0 rrt:0 rrl:0"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:12.00W operational enlat:0 exlat:0 rrt:0 rrl:0' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n - ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@18 -- # shift 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:18:18.457 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x74706db0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x74706db0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x74706db0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x74706db0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x74706db0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x74706db0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x74706db0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x74706db0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x74706db0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="1"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=1 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[flbas]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[mc]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[dpc]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 
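
The wall of eval lines above is nvme_get at work: nvme-cli's `id-ctrl`/`id-ns` output is one `field : value` pair per line, and the helper reads each pair with `IFS=:` and stores it in a global associative array named after the device (nvme0, ng0n1), which is why every controller and namespace register becomes addressable later as `${nvme0[oacs]}` or `${ng0n1[nsze]}`. A condensed sketch of the loop (a hypothetical rewrite of nvme/functions.sh nvme_get, not the literal source):

    #!/usr/bin/env bash
    shopt -s extglob                            # scripts/common.sh enables this too
    # Parse "field : value" pairs from nvme-cli into a global assoc array.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # e.g. declare -gA ng0n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # "nsze   " -> "nsze"
            val=${val##+([[:space:]])}          # strip blanks after the ":"
            [[ -n $val ]] || continue           # skip banner/blank lines
            eval "${ref}[$reg]=\"\$val\""       # ng0n1[nsze]=0x74706db0
        done < <("$@")
    }

    nvme_get ng0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
    echo "namespace size: ${ng0n1[nsze]} blocks"   # 0x74706db0 in this run

Only leading whitespace is stripped from the value, which matches the trace above keeping the padded serial string `nvme0[sn]='BTLJ724400Z71P0FGN '` intact.
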
00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1,000,204,886,016 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="1,000,204,886,016"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=1,000,204,886,016 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[mssrl]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[mcl]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[msrc]=0 00:18:18.720 
23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:18:18.720 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 01000000492a00005cd2e467bed34d51 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="01000000492a00005cd2e467bed34d51"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[nguid]=01000000492a00005cd2e467bed34d51 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 5cd2e467bed35239 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="5cd2e467bed35239"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[eui64]=5cd2e467bed35239 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) 
]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@18 -- # shift 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x74706db0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x74706db0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x74706db0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x74706db0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x74706db0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x74706db0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0x74706db0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x74706db0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # 
nvme0n1[nuse]=0x74706db0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[mc]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:18:18.721 23:59:56 
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:18:18.721 23:59:56 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"' 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.721 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 
-- # eval 'nvme0n1[nabspf]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 1,000,204,886,016 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="1,000,204,886,016"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=1,000,204,886,016 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[mcl]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[msrc]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 01000000492a00005cd2e467bed34d51 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="01000000492a00005cd2e467bed34d51"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[nguid]=01000000492a00005cd2e467bed34d51 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n 5cd2e467bed35239 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="5cd2e467bed35239"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[eui64]=5cd2e467bed35239 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # IFS=: 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@21 -- # read -r reg val 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- 
nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:84:00.0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@22 -- # get_nvme_with_ns_management 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@155 -- # local _ctrls 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@157 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@157 -- # get_nvmes_with_ns_management 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@144 -- # (( 1 == 0 )) 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@146 -- # local ctrl 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}" 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@122 -- # local -A bits 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@125 -- # bits["ss/sr"]=1 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@126 -- # bits["fnvme"]=2 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@127 -- # bits["fc/fi"]=4 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@128 -- # bits["nsmgt"]=8 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@129 -- # bits["self-test"]=16 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@130 -- # bits["directives"]=32 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@132 -- # bits["virtmgt"]=128 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@134 -- # bits["getlba"]=512 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@137 -- # bit=nsmgt 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@138 -- # [[ -n 8 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@75 -- # [[ -n 0xe ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@76 -- # echo 0xe 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] )) 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@148 -- # echo nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@151 -- # return 0 00:18:18.722 23:59:57 
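The bits table and the (( 0xe & bits[nsmgt] )) test above are how get_nvmes_with_ns_management filters controllers: a device qualifies only when bit 3 (mask 8) of the OACS word reported by id-ctrl is set, and this drive advertises OACS 0xe. A compact sketch of the same check; supports_nsmgt is an illustrative helper name, not part of functions.sh:

    # OACS bit 3 advertises Namespace Management support.
    supports_nsmgt() {
        local oacs=$1
        (( oacs & 8 ))          # 0xe & 8 == 8, so nvme0 qualifies
    }
    supports_nsmgt 0xe && echo "namespace management supported"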
nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@158 -- # (( 1 > 0 )) 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@159 -- # echo nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@160 -- # return 0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@22 -- # nvme_name=nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@27 -- # sel_cmd=() 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@29 -- # get_oncs nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@75 -- # [[ -n 0x6 ]] 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- nvme/functions.sh@76 -- # echo 0x6 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@29 -- # (( 0x6 & 1 << 4 )) 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@33 -- # ctrlr=/dev/nvme0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@34 -- # ns=/dev/nvme0n1 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@35 -- # bdf=0000:84:00.0 00:18:18.722 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@37 -- # waitforblk nvme0n1 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1239 -- # local i=0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1250 -- # return 0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # grep oacs 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # cut -d: -f2 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@39 -- # oacs=' 0xe' 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@40 -- # oacs_firmware=4 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/nvme0n1 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@43 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@44 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/nvme0n1 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@46 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@47 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/nvme0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- 
cuse/spdk_nvme_cli_cuse.sh@48 -- # '[' 4 -ne 0 ']' 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@49 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/nvme0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@51 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/nvme0 00:18:18.723 Smart Log for NVME device:nvme0 namespace-id:ffffffff 00:18:18.723 critical_warning : 0 00:18:18.723 temperature : 40 °C (313 K) 00:18:18.723 available_spare : 100% 00:18:18.723 available_spare_threshold : 10% 00:18:18.723 percentage_used : 19% 00:18:18.723 endurance group critical warning summary: 0 00:18:18.723 Data Units Read : 350,166,788 (179.29 TB) 00:18:18.723 Data Units Written : 512,073,379 (262.18 TB) 00:18:18.723 host_read_commands : 15,704,181,041 00:18:18.723 host_write_commands : 20,945,862,148 00:18:18.723 controller_busy_time : 3,191 00:18:18.723 power_cycles : 859 00:18:18.723 power_on_hours : 40,904 00:18:18.723 unsafe_shutdowns : 736 00:18:18.723 media_errors : 0 00:18:18.723 num_err_log_entries : 3,858 00:18:18.723 Warning Temperature Time : 377 00:18:18.723 Critical Composite Temperature Time : 0 00:18:18.723 Thermal Management T1 Trans Count : 0 00:18:18.723 Thermal Management T2 Trans Count : 0 00:18:18.723 Thermal Management T1 Total Time : 0 00:18:18.723 Thermal Management T2 Total Time : 0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@52 -- # /usr/local/src/nvme-cli/nvme error-log /dev/nvme0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@53 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/nvme0 -f 1 -l 100 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@54 -- # /usr/local/src/nvme-cli/nvme get-log /dev/nvme0 -i 1 -l 100 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@59 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/nvme0 -n 1 -f 2 -v 0 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@59 -- # true 00:18:18.723 23:59:57 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:18:20.153 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:18:20.153 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:18:20.153 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:18:20.153 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:18:20.153 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:18:20.153 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:18:20.153 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:18:20.153 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:18:20.153 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:18:21.150 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@64 -- # spdk_tgt_pid=518940 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@63 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@65 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@67 -- # waitforlisten 518940 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@835 -- # '[' -z 518940 ']' 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.150 23:59:59 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:21.150 [2024-12-09 23:59:59.570984] Starting SPDK v25.01-pre git sha1 1ae735a5d / DPDK 24.03.0 initialization... 00:18:21.150 [2024-12-09 23:59:59.571169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518940 ] 00:18:21.410 [2024-12-09 23:59:59.710844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:21.410 [2024-12-09 23:59:59.808990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.410 [2024-12-09 23:59:59.808994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.669 [2024-12-10 00:00:00.050922] 'OCF_Core' volume operations registered 00:18:21.669 [2024-12-10 00:00:00.050974] 'OCF_Cache' volume operations registered 00:18:21.669 [2024-12-10 00:00:00.056632] 'OCF Composite' volume operations registered 00:18:21.669 [2024-12-10 00:00:00.061743] 'SPDK_block_device' volume operations registered 00:18:21.928 00:00:00 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.928 00:00:00 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@868 -- # return 0 00:18:21.928 00:00:00 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:18:25.231 Nvme0n1 00:18:25.231 00:00:03 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@70 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:18:25.489 [2024-12-10 00:00:04.007467] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:18:25.489 [2024-12-10 00:00:04.007507] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:18:25.489 [2024-12-10 00:00:04.007641] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:18:25.489 [2024-12-10 00:00:04.007682] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:18:25.749 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@72 -- # ctrlr=/dev/spdk/nvme0 00:18:25.749 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@73 -- # ns=/dev/spdk/nvme0n1 00:18:25.749 00:00:04 
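Condensed, the CUSE bring-up traced around here is three steps: start spdk_tgt, attach the PCIe controller as a bdev over RPC, and register it with the CUSE driver, which surfaces /dev/spdk/nvme0 and /dev/spdk/nvme0n1 as character devices. The commands below mirror the trace but drop the waitforlisten/waitforfile synchronization the script wraps around them:

    rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
    $rootdir/build/bin/spdk_tgt -m 0x3 &
    # (the script waits on /var/tmp/spdk.sock before issuing RPCs)
    $rootdir/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0
    $rootdir/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0   # creates /dev/spdk/nvme0*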
nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@74 -- # waitforfile /dev/spdk/nvme0n1 00:18:25.749 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1269 -- # local i=0 00:18:25.749 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1270 -- # '[' '!' -e /dev/spdk/nvme0n1 ']' 00:18:25.749 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1276 -- # '[' '!' -e /dev/spdk/nvme0n1 ']' 00:18:25.749 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1280 -- # return 0 00:18:25.749 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:18:26.008 [ 00:18:26.008 { 00:18:26.008 "name": "Nvme0n1", 00:18:26.008 "aliases": [ 00:18:26.008 "201a7602-dd3b-4080-b574-c7127721e4f7" 00:18:26.008 ], 00:18:26.008 "product_name": "NVMe disk", 00:18:26.008 "block_size": 512, 00:18:26.008 "num_blocks": 1953525168, 00:18:26.008 "uuid": "201a7602-dd3b-4080-b574-c7127721e4f7", 00:18:26.008 "numa_id": 1, 00:18:26.008 "assigned_rate_limits": { 00:18:26.008 "rw_ios_per_sec": 0, 00:18:26.008 "rw_mbytes_per_sec": 0, 00:18:26.008 "r_mbytes_per_sec": 0, 00:18:26.008 "w_mbytes_per_sec": 0 00:18:26.008 }, 00:18:26.008 "claimed": false, 00:18:26.008 "zoned": false, 00:18:26.008 "supported_io_types": { 00:18:26.008 "read": true, 00:18:26.008 "write": true, 00:18:26.008 "unmap": true, 00:18:26.008 "flush": true, 00:18:26.008 "reset": true, 00:18:26.008 "nvme_admin": true, 00:18:26.008 "nvme_io": true, 00:18:26.008 "nvme_io_md": false, 00:18:26.008 "write_zeroes": true, 00:18:26.008 "zcopy": false, 00:18:26.008 "get_zone_info": false, 00:18:26.008 "zone_management": false, 00:18:26.008 "zone_append": false, 00:18:26.008 "compare": false, 00:18:26.008 "compare_and_write": false, 00:18:26.008 "abort": true, 00:18:26.008 "seek_hole": false, 00:18:26.008 "seek_data": false, 00:18:26.008 "copy": false, 00:18:26.008 "nvme_iov_md": false 00:18:26.008 }, 00:18:26.008 "driver_specific": { 00:18:26.008 "nvme": [ 00:18:26.008 { 00:18:26.008 "pci_address": "0000:84:00.0", 00:18:26.008 "trid": { 00:18:26.008 "trtype": "PCIe", 00:18:26.008 "traddr": "0000:84:00.0" 00:18:26.008 }, 00:18:26.008 "cuse_device": "spdk/nvme0n1", 00:18:26.008 "ctrlr_data": { 00:18:26.008 "cntlid": 0, 00:18:26.008 "vendor_id": "0x8086", 00:18:26.008 "model_number": "INTEL SSDPE2KX010T8", 00:18:26.008 "serial_number": "BTLJ724400Z71P0FGN", 00:18:26.008 "firmware_revision": "VDV10184", 00:18:26.008 "oacs": { 00:18:26.008 "security": 0, 00:18:26.008 "format": 1, 00:18:26.008 "firmware": 1, 00:18:26.008 "ns_manage": 1 00:18:26.008 }, 00:18:26.008 "multi_ctrlr": false, 00:18:26.008 "ana_reporting": false 00:18:26.008 }, 00:18:26.008 "vs": { 00:18:26.008 "nvme_version": "1.2" 00:18:26.008 }, 00:18:26.008 "ns_data": { 00:18:26.008 "id": 1, 00:18:26.008 "can_share": false 00:18:26.008 } 00:18:26.008 } 00:18:26.008 ], 00:18:26.008 "mp_policy": "active_passive" 00:18:26.008 } 00:18:26.008 } 00:18:26.008 ] 00:18:26.008 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:18:26.266 [ 00:18:26.266 { 00:18:26.266 "name": "Nvme0", 00:18:26.266 "ctrlrs": [ 00:18:26.266 { 00:18:26.266 "state": "enabled", 00:18:26.266 "cuse_device": "spdk/nvme0", 00:18:26.266 "trid": { 00:18:26.266 "trtype": "PCIe", 00:18:26.266 "traddr": "0000:84:00.0" 00:18:26.266 }, 00:18:26.266 "cntlid": 0, 00:18:26.266 "host": { 00:18:26.266 "nqn": 
"nqn.2014-08.org.nvmexpress:uuid:64a9f980-e6f2-4005-b27d-cdfadf5fbf86", 00:18:26.266 "addr": "", 00:18:26.266 "svcid": "" 00:18:26.266 }, 00:18:26.266 "numa_id": 1 00:18:26.266 } 00:18:26.266 ] 00:18:26.266 } 00:18:26.266 ] 00:18:26.266 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@79 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/spdk/nvme0n1 00:18:26.266 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@80 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/spdk/nvme0n1 00:18:26.266 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@81 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/spdk/nvme0n1 00:18:26.266 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@83 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/spdk/nvme0 00:18:26.266 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@84 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/spdk/nvme0 00:18:26.266 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@85 -- # '[' 4 -ne 0 ']' 00:18:26.266 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@86 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/spdk/nvme0 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@88 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/spdk/nvme0 00:18:26.526 Smart Log for NVME device:nvme0 namespace-id:ffffffff 00:18:26.526 critical_warning : 0 00:18:26.526 temperature : 39 °C (312 K) 00:18:26.526 available_spare : 100% 00:18:26.526 available_spare_threshold : 10% 00:18:26.526 percentage_used : 19% 00:18:26.526 endurance group critical warning summary: 0 00:18:26.526 Data Units Read : 350,166,790 (179.29 TB) 00:18:26.526 Data Units Written : 512,073,379 (262.18 TB) 00:18:26.526 host_read_commands : 15,704,181,094 00:18:26.526 host_write_commands : 20,945,862,148 00:18:26.526 controller_busy_time : 3,191 00:18:26.526 power_cycles : 859 00:18:26.526 power_on_hours : 40,904 00:18:26.526 unsafe_shutdowns : 736 00:18:26.526 media_errors : 0 00:18:26.526 num_err_log_entries : 3,859 00:18:26.526 Warning Temperature Time : 377 00:18:26.526 Critical Composite Temperature Time : 0 00:18:26.526 Thermal Management T1 Trans Count : 0 00:18:26.526 Thermal Management T2 Trans Count : 0 00:18:26.526 Thermal Management T1 Total Time : 0 00:18:26.526 Thermal Management T2 Total Time : 0 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@89 -- # /usr/local/src/nvme-cli/nvme error-log /dev/spdk/nvme0 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@90 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/spdk/nvme0 -f 1 -l 100 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@91 -- # /usr/local/src/nvme-cli/nvme get-log /dev/spdk/nvme0 -i 1 -l 100 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@92 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0 00:18:26.526 [2024-12-10 00:00:04.910207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:84:00.0, 0] resetting controller 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@93 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/spdk/nvme0 -n 1 -f 2 -v 0 00:18:26.526 [2024-12-10 00:00:04.930262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:186 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:26.526 [2024-12-10 00:00:04.930290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT NAMESPACE SPECIFIC (01/0f) qid:0 cid:186 cdw0:0 sqhd:000d p:1 m:0 dnr:1 
00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@93 -- # true 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 
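The sed/diff pairs running through this stretch are the verification loop from lines 95-99 of spdk_nvme_cli_cuse.sh (reproduced in the backtrace below), which normalizes each kernel capture and diffs it against its CUSE counterpart; lightly commented:

    for i in {1..11}; do
        if [ -f "${KERNEL_OUT}.${i}" ] && [ -f "${CUSE_OUT}.${i}" ]; then
            # Rewrite the kernel device name so both captures say nvme0.
            sed -i "s/${nvme_name}/nvme0/g" ${KERNEL_OUT}.${i}
            # Any surviving difference trips the ERR trap and fails the test.
            diff --suppress-common-lines ${KERNEL_OUT}.${i} ${CUSE_OUT}.${i}
        fi
    done

In this run nvme_name is already nvme0, so the sed is a no-op (s/nvme0/nvme0/g above) and the pair-7 mismatch that follows is purely a content difference.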
00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 ']' 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 00:18:26.526 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 00:18:26.526 4a5,22 00:18:26.526 > error_count : 3857 00:18:26.526 > sqid : 0 00:18:26.526 > cmdid : 0xffff 00:18:26.526 > status_field : 0x6006(Internal Error: The command was not completed successfully due to an internal error) 00:18:26.526 > phase_tag : 0 00:18:26.526 > parm_err_loc : 0xffff 00:18:26.526 > lba : 0 00:18:26.526 > nsid : 0xffffffff 00:18:26.526 > vs : 0 00:18:26.526 > trtype : The transport type is not indicated or the error is not 
transport related. 00:18:26.526 > csi : 0 00:18:26.526 > opcode : 0 00:18:26.526 > cs : 0 00:18:26.526 > trtype_spec_info: 0 00:18:26.526 > log_page_version: 0 00:18:26.526 > ................. 00:18:26.526 > Entry[ 1] 00:18:26.526 > ................. 00:18:26.526 21c39 00:18:26.526 < Entry[ 1] 00:18:26.526 --- 00:18:26.526 > Entry[ 2] 00:18:26.526 39c57 00:18:26.526 < Entry[ 2] 00:18:26.526 --- 00:18:26.526 > Entry[ 3] 00:18:26.527 57c75 00:18:26.527 < Entry[ 3] 00:18:26.527 --- 00:18:26.527 > Entry[ 4] 00:18:26.527 75c93 00:18:26.527 < Entry[ 4] 00:18:26.527 --- 00:18:26.527 > Entry[ 5] 00:18:26.527 93c111 00:18:26.527 < Entry[ 5] 00:18:26.527 --- 00:18:26.527 > Entry[ 6] 00:18:26.527 111c129 00:18:26.527 < Entry[ 6] 00:18:26.527 --- 00:18:26.527 > Entry[ 7] 00:18:26.527 129c147 00:18:26.527 < Entry[ 7] 00:18:26.527 --- 00:18:26.527 > Entry[ 8] 00:18:26.527 147c165 00:18:26.527 < Entry[ 8] 00:18:26.527 --- 00:18:26.527 > Entry[ 9] 00:18:26.527 165c183 00:18:26.527 < Entry[ 9] 00:18:26.527 --- 00:18:26.527 > Entry[10] 00:18:26.527 183c201 00:18:26.527 < Entry[10] 00:18:26.527 --- 00:18:26.527 > Entry[11] 00:18:26.527 201c219 00:18:26.527 < Entry[11] 00:18:26.527 --- 00:18:26.527 > Entry[12] 00:18:26.527 219c237 00:18:26.527 < Entry[12] 00:18:26.527 --- 00:18:26.527 > Entry[13] 00:18:26.527 237c255 00:18:26.527 < Entry[13] 00:18:26.527 --- 00:18:26.527 > Entry[14] 00:18:26.527 255c273 00:18:26.527 < Entry[14] 00:18:26.527 --- 00:18:26.527 > Entry[15] 00:18:26.527 273c291 00:18:26.527 < Entry[15] 00:18:26.527 --- 00:18:26.527 > Entry[16] 00:18:26.527 291c309 00:18:26.527 < Entry[16] 00:18:26.527 --- 00:18:26.527 > Entry[17] 00:18:26.527 309c327 00:18:26.527 < Entry[17] 00:18:26.527 --- 00:18:26.527 > Entry[18] 00:18:26.527 327c345 00:18:26.527 < Entry[18] 00:18:26.527 --- 00:18:26.527 > Entry[19] 00:18:26.527 345c363 00:18:26.527 < Entry[19] 00:18:26.527 --- 00:18:26.527 > Entry[20] 00:18:26.527 363c381 00:18:26.527 < Entry[20] 00:18:26.527 --- 00:18:26.527 > Entry[21] 00:18:26.527 381c399 00:18:26.527 < Entry[21] 00:18:26.527 --- 00:18:26.527 > Entry[22] 00:18:26.527 399c417 00:18:26.527 < Entry[22] 00:18:26.527 --- 00:18:26.527 > Entry[23] 00:18:26.527 417c435 00:18:26.527 < Entry[23] 00:18:26.527 --- 00:18:26.527 > Entry[24] 00:18:26.527 435c453 00:18:26.527 < Entry[24] 00:18:26.527 --- 00:18:26.527 > Entry[25] 00:18:26.527 453c471 00:18:26.527 < Entry[25] 00:18:26.527 --- 00:18:26.527 > Entry[26] 00:18:26.527 471c489 00:18:26.527 < Entry[26] 00:18:26.527 --- 00:18:26.527 > Entry[27] 00:18:26.527 489c507 00:18:26.527 < Entry[27] 00:18:26.527 --- 00:18:26.527 > Entry[28] 00:18:26.527 507c525 00:18:26.527 < Entry[28] 00:18:26.527 --- 00:18:26.527 > Entry[29] 00:18:26.527 525c543 00:18:26.527 < Entry[29] 00:18:26.527 --- 00:18:26.527 > Entry[30] 00:18:26.527 543c561 00:18:26.527 < Entry[30] 00:18:26.527 --- 00:18:26.527 > Entry[31] 00:18:26.527 561c579 00:18:26.527 < Entry[31] 00:18:26.527 --- 00:18:26.527 > Entry[32] 00:18:26.527 579c597 00:18:26.527 < Entry[32] 00:18:26.527 --- 00:18:26.527 > Entry[33] 00:18:26.527 597c615 00:18:26.527 < Entry[33] 00:18:26.527 --- 00:18:26.527 > Entry[34] 00:18:26.527 615c633 00:18:26.527 < Entry[34] 00:18:26.527 --- 00:18:26.527 > Entry[35] 00:18:26.527 633c651 00:18:26.527 < Entry[35] 00:18:26.527 --- 00:18:26.527 > Entry[36] 00:18:26.527 651c669 00:18:26.527 < Entry[36] 00:18:26.527 --- 00:18:26.527 > Entry[37] 00:18:26.527 669c687 00:18:26.527 < Entry[37] 00:18:26.527 --- 00:18:26.527 > Entry[38] 00:18:26.527 687c705 00:18:26.527 < Entry[38] 
00:18:26.527 --- 00:18:26.527 > Entry[39] 00:18:26.527 705c723 00:18:26.527 < Entry[39] 00:18:26.527 --- 00:18:26.527 > Entry[40] 00:18:26.527 723c741 00:18:26.527 < Entry[40] 00:18:26.527 --- 00:18:26.527 > Entry[41] 00:18:26.527 741c759 00:18:26.527 < Entry[41] 00:18:26.527 --- 00:18:26.527 > Entry[42] 00:18:26.527 759c777 00:18:26.527 < Entry[42] 00:18:26.527 --- 00:18:26.527 > Entry[43] 00:18:26.527 777c795 00:18:26.527 < Entry[43] 00:18:26.527 --- 00:18:26.527 > Entry[44] 00:18:26.527 795c813 00:18:26.527 < Entry[44] 00:18:26.527 --- 00:18:26.527 > Entry[45] 00:18:26.527 813c831 00:18:26.527 < Entry[45] 00:18:26.527 --- 00:18:26.527 > Entry[46] 00:18:26.527 831c849 00:18:26.527 < Entry[46] 00:18:26.527 --- 00:18:26.527 > Entry[47] 00:18:26.527 849c867 00:18:26.527 < Entry[47] 00:18:26.527 --- 00:18:26.527 > Entry[48] 00:18:26.527 867c885 00:18:26.527 < Entry[48] 00:18:26.527 --- 00:18:26.527 > Entry[49] 00:18:26.527 885c903 00:18:26.527 < Entry[49] 00:18:26.527 --- 00:18:26.527 > Entry[50] 00:18:26.527 903c921 00:18:26.527 < Entry[50] 00:18:26.527 --- 00:18:26.527 > Entry[51] 00:18:26.527 921c939 00:18:26.527 < Entry[51] 00:18:26.527 --- 00:18:26.527 > Entry[52] 00:18:26.527 939c957 00:18:26.527 < Entry[52] 00:18:26.527 --- 00:18:26.527 > Entry[53] 00:18:26.527 957c975 00:18:26.527 < Entry[53] 00:18:26.527 --- 00:18:26.527 > Entry[54] 00:18:26.527 975c993 00:18:26.527 < Entry[54] 00:18:26.527 --- 00:18:26.527 > Entry[55] 00:18:26.527 993c1011 00:18:26.527 < Entry[55] 00:18:26.527 --- 00:18:26.527 > Entry[56] 00:18:26.527 1011c1029 00:18:26.527 < Entry[56] 00:18:26.527 --- 00:18:26.527 > Entry[57] 00:18:26.527 1029c1047 00:18:26.527 < Entry[57] 00:18:26.527 --- 00:18:26.527 > Entry[58] 00:18:26.527 1047c1065 00:18:26.527 < Entry[58] 00:18:26.527 --- 00:18:26.527 > Entry[59] 00:18:26.527 1065c1083 00:18:26.527 < Entry[59] 00:18:26.527 --- 00:18:26.527 > Entry[60] 00:18:26.527 1083c1101 00:18:26.527 < Entry[60] 00:18:26.527 --- 00:18:26.527 > Entry[61] 00:18:26.527 1101c1119 00:18:26.527 < Entry[61] 00:18:26.527 --- 00:18:26.527 > Entry[62] 00:18:26.527 1119c1137 00:18:26.527 < Entry[62] 00:18:26.527 --- 00:18:26.527 > Entry[63] 00:18:26.527 1123,1140d1140 00:18:26.527 < cmdid : 0xffff 00:18:26.527 < status_field : 0x6006(Internal Error: The command was not completed successfully due to an internal error) 00:18:26.527 < phase_tag : 0 00:18:26.527 < parm_err_loc : 0xffff 00:18:26.527 < lba : 0 00:18:26.527 < nsid : 0xffffffff 00:18:26.527 < vs : 0 00:18:26.527 < trtype : The transport type is not indicated or the error is not transport related. 00:18:26.527 < csi : 0 00:18:26.527 < opcode : 0 00:18:26.527 < cs : 0 00:18:26.527 < trtype_spec_info: 0 00:18:26.527 < log_page_version: 0 00:18:26.527 < ................. 00:18:26.527 < Entry[63] 00:18:26.527 < ................. 
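The pair-7 failure unfolding here is a race on live data rather than a CUSE defect: num_err_log_entries advanced from 3,858 in the kernel smart-log to 3,859 in the CUSE one, a fresh entry (error_count 3857) appeared at the head of the CUSE error log, every surviving Entry[N] shifted by one (the constant 18-line offset in the hunks 21c39, 39c57, ...), and the oldest entry fell off the end of the 64-entry listing. One way a harness could mask that kind of volatility before diffing; this is an illustrative filter, not something spdk_nvme_cli_cuse.sh does:

    # Illustrative only: blank the entry indices and per-entry counters
    # so an entry that arrives between captures cannot renumber the rest.
    # The ^Entry and error_count patterns assume nvme-cli's error-log
    # layout as captured above.
    normalize_errlog() {
        sed -E -e 's/^Entry\[ *[0-9]+\]/Entry/' \
               -e '/^[[:space:]]*error_count/d' "$1"
    }
    diff <(normalize_errlog kernel.out.7) <(normalize_errlog cuse.out.7)

After normalization, diff would report only the genuinely inserted and expired entries instead of sixty-odd renumbered headers.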
00:18:26.527 < error_count : 3793 00:18:26.527 < sqid : 2 00:18:26.527 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # trap - ERR 00:18:26.527 00:00:04 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@98 -- # print_backtrace 00:18:26.527 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:18:26.527 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1159 -- # args=() 00:18:26.528 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1159 -- # local args 00:18:26.528 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1161 -- # xtrace_disable 00:18:26.528 00:00:04 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:26.528 ========== Backtrace start: ========== 00:18:26.528 00:18:26.528 in /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh:98 -> main([]) 00:18:26.528 ... 00:18:26.528 93 ${NVME_CMD} set-feature $ctrlr -n 1 -f 2 -v 0 2> ${CUSE_OUT}.11 || true 00:18:26.528 94 00:18:26.528 95 for i in {1..11}; do 00:18:26.528 96 if [ -f "${KERNEL_OUT}.${i}" ] && [ -f "${CUSE_OUT}.${i}" ]; then 00:18:26.528 97 sed -i "s/${nvme_name}/nvme0/g" ${KERNEL_OUT}.${i} 00:18:26.528 => 98 diff --suppress-common-lines ${KERNEL_OUT}.${i} ${CUSE_OUT}.${i} 00:18:26.528 99 fi 00:18:26.528 100 done 00:18:26.528 101 00:18:26.528 102 rm -Rf $testdir/match_files 00:18:26.528 103 00:18:26.528 ... 00:18:26.528 00:18:26.528 ========== Backtrace end ========== 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1198 -- # return 0 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@1 -- # kill -9 518940 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- cuse/spdk_nvme_cli_cuse.sh@1 -- # exit 1 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1129 -- # trap - ERR 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1129 -- # print_backtrace 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1159 -- # args=('/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh' 'nvme_cli_cuse') 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1159 -- # local args 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1161 -- # xtrace_disable 00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@10 -- # set +x 00:18:26.528 ========== Backtrace start: ========== 00:18:26.528 00:18:26.528 in /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvme_cli_cuse"],["/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh"]) 00:18:26.528 ... 00:18:26.528 1124 timing_enter $test_name 00:18:26.528 1125 echo "************************************" 00:18:26.528 1126 echo "START TEST $test_name" 00:18:26.528 1127 echo "************************************" 00:18:26.528 1128 xtrace_restore 00:18:26.528 1129 time "$@" 00:18:26.528 1130 xtrace_disable 00:18:26.528 1131 echo "************************************" 00:18:26.528 1132 echo "END TEST $test_name" 00:18:26.528 1133 echo "************************************" 00:18:26.528 1134 timing_exit $test_name 00:18:26.528 ... 00:18:26.528 in /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh:19 -> main([]) 00:18:26.528 ... 
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1129 -- # trap - ERR
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1129 -- # print_backtrace
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]]
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1159 -- # args=('/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh' 'nvme_cli_cuse')
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1159 -- # local args
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1161 -- # xtrace_disable
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@10 -- # set +x
00:18:26.528 ========== Backtrace start: ==========
00:18:26.528
00:18:26.528 in /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvme_cli_cuse"],["/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh"])
00:18:26.528 ...
00:18:26.528 1124 timing_enter $test_name
00:18:26.528 1125 echo "************************************"
00:18:26.528 1126 echo "START TEST $test_name"
00:18:26.528 1127 echo "************************************"
00:18:26.528 1128 xtrace_restore
00:18:26.528 1129 time "$@"
00:18:26.528 1130 xtrace_disable
00:18:26.528 1131 echo "************************************"
00:18:26.528 1132 echo "END TEST $test_name"
00:18:26.528 1133 echo "************************************"
00:18:26.528 1134 timing_exit $test_name
00:18:26.528 ...
00:18:26.528 in /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh:19 -> main([])
00:18:26.528 ...
00:18:26.528 14 fi
00:18:26.528 15
00:18:26.528 16 modprobe cuse
00:18:26.528 17 run_test "nvme_cuse_app" $testdir/cuse
00:18:26.528 18 run_test "nvme_cuse_rpc" $testdir/nvme_cuse_rpc.sh
00:18:26.528 => 19 run_test "nvme_cli_cuse" $testdir/spdk_nvme_cli_cuse.sh
00:18:26.528 20 run_test "nvme_cli_plugin" $testdir/spdk_nvme_cli_plugin.sh
00:18:26.528 21 run_test "nvme_smartctl_cuse" $testdir/spdk_smartctl_cuse.sh
00:18:26.528 22 run_test "nvme_ns_manage_cuse" $testdir/nvme_ns_manage_cuse.sh
00:18:26.528 23 rmmod cuse
00:18:26.528 24
00:18:26.528 ...
00:18:26.528
00:18:26.528 ========== Backtrace end ==========
00:18:26.528 00:00:05 nvme_cuse.nvme_cli_cuse -- common/autotest_common.sh@1198 -- # return 0
00:18:26.528
00:18:26.528 real 0m10.758s
00:18:26.528 user 0m3.189s
00:18:26.528 sys 0m2.514s
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1129 -- # trap - ERR
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1129 -- # print_backtrace
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]]
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1159 -- # args=('/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh' 'nvme_cuse' '/var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf')
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1159 -- # local args
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1161 -- # xtrace_disable
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@10 -- # set +x
00:18:26.528 ========== Backtrace start: ==========
00:18:26.528
00:18:26.528 in /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvme_cuse"],["/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh"])
00:18:26.528 ...
00:18:26.528 1124 timing_enter $test_name
00:18:26.528 1125 echo "************************************"
00:18:26.528 1126 echo "START TEST $test_name"
00:18:26.528 1127 echo "************************************"
00:18:26.528 1128 xtrace_restore
00:18:26.528 1129 time "$@"
00:18:26.528 1130 xtrace_disable
00:18:26.528 1131 echo "************************************"
00:18:26.528 1132 echo "END TEST $test_name"
00:18:26.528 1133 echo "************************************"
00:18:26.528 1134 timing_exit $test_name
00:18:26.528 ...
00:18:26.528 in /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh:223 -> main(["/var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf"])
00:18:26.528 ...
00:18:26.528 218
00:18:26.528 219 if [[ $SPDK_TEST_NVME_BP -eq 1 ]]; then
00:18:26.528 220 run_test "nvme_bp" $rootdir/test/nvme/nvme_bp.sh
00:18:26.528 221 fi
00:18:26.528 222 if [[ $SPDK_TEST_NVME_CUSE -eq 1 ]]; then
00:18:26.528 => 223 run_test "nvme_cuse" $rootdir/test/nvme/cuse/nvme_cuse.sh
00:18:26.528 224 fi
00:18:26.528 225 if [[ $SPDK_TEST_NVME_CMB -eq 1 ]]; then
00:18:26.528 226 run_test "nvme_cmb" $rootdir/test/nvme/cmb/cmb.sh
00:18:26.528 227 fi
00:18:26.528 228 if [[ $SPDK_TEST_NVME_FDP -eq 1 ]]; then
00:18:26.528 ...
00:18:26.528
00:18:26.528 ========== Backtrace end ==========
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1198 -- # return 0
00:18:26.528
00:18:26.528 real 0m29.534s
00:18:26.528 user 0m39.728s
00:18:26.528 sys 0m4.185s
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1 -- # autotest_cleanup
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1396 -- # local autotest_es=1
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@1397 -- # xtrace_disable
00:18:26.528 00:00:05 nvme_cuse -- common/autotest_common.sh@10 -- # set +x
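Both backtraces above pass through the same run_test wrapper (autotest_common.sh lines 1124-1134, excerpted twice). Pulled out of the harness, the wrapper is just timing bookkeeping plus START/END banners around `time "$@"`, with the test's exit status propagating upward. A self-contained sketch, with SPDK's real helpers stubbed out so it runs standalone:

#!/usr/bin/env bash
# Sketch of the run_test wrapper shown in the backtraces above. The
# four stubs stand in for SPDK's helpers and are placeholders only.

timing_enter()   { :; }  # records the start timestamp in the real harness
timing_exit()    { :; }  # records the end timestamp in the real harness
xtrace_restore() { :; }  # re-enables `set -x` if it was on before
xtrace_disable() { :; }  # suppresses `set -x` around harness bookkeeping

run_test() {
    local test_name=$1
    shift

    timing_enter "$test_name"
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    xtrace_restore
    time "$@"            # the test command; its failure trips the ERR trap
    xtrace_disable
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    timing_exit "$test_name"
}

# Usage, mirroring nvme_cuse.sh:19 above:
run_test "demo_true" true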
00:18:41.413 INFO: APP EXITING
00:18:41.413 INFO: killing all VMs
00:18:41.413 INFO: killing vhost app
00:18:41.413 INFO: EXIT DONE
00:18:41.413 Waiting for block devices as requested
00:18:41.413 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:18:41.413 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:18:41.413 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:18:41.413 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:18:41.413 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:18:41.413 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:18:41.413 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:18:41.413 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:18:41.413 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:18:41.413 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:18:41.671 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:18:41.671 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:18:41.671 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:18:41.930 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:18:41.930 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:18:41.930 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:18:41.930 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
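The `vfio-pci -> nvme` and `vfio-pci -> ioatdma` lines record cleanup handing the test devices back from the userspace vfio-pci driver to their kernel drivers (SPDK's scripts/setup.sh reset path). A sketch of that sysfs rebind for the one NVMe device above; the BDF is taken from the log, but the script itself is illustrative, does less bookkeeping than setup.sh, and must run as root:

#!/usr/bin/env bash
# Sketch: rebind one PCI device from vfio-pci back to its kernel driver,
# the operation behind "0000:84:00.0 (8086 0a54): vfio-pci -> nvme".
bdf=0000:84:00.0
driver=nvme

# Detach from whatever driver currently owns the device (vfio-pci here).
if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
fi

# Clear any driver_override left behind by the userspace setup, then let
# the kernel match the normal driver for this device ID.
echo "" > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe

# Alternatively, bind explicitly:
#   echo "$bdf" > "/sys/bus/pci/drivers/$driver/bind"
echo "$bdf: now bound to $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"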
00:18:43.834 Cleaning
00:18:43.834 Removing: /var/run/dpdk/spdk0/config
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:18:43.834 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:18:43.834 Removing: /var/run/dpdk/spdk0/hugepage_info
00:18:43.834 Removing: /dev/shm/spdk_tgt_trace.pid465258
00:18:43.834 Removing: /dev/shm/spdk_tgt_trace.pid518940
00:18:43.834 Removing: /var/tmp/spdk_pci_lock_0000:84:00.0
00:18:43.834 Removing: /var/tmp/spdk_cpu_lock_000
00:18:43.834 Removing: /var/tmp/spdk_cpu_lock_001
00:18:43.834 Removing: /var/run/dpdk/spdk0
00:18:43.834 Removing: /var/run/dpdk/spdk_pid463465
00:18:43.834 Removing: /var/run/dpdk/spdk_pid464327
00:18:43.834 Removing: /var/run/dpdk/spdk_pid465258
00:18:43.834 Removing: /var/run/dpdk/spdk_pid465829
00:18:43.834 Removing: /var/run/dpdk/spdk_pid466512
00:18:43.834 Removing: /var/run/dpdk/spdk_pid466662
00:18:43.834 Removing: /var/run/dpdk/spdk_pid467481
00:18:43.834 Removing: /var/run/dpdk/spdk_pid467506
00:18:43.834 Removing: /var/run/dpdk/spdk_pid467933
00:18:43.834 Removing: /var/run/dpdk/spdk_pid468254
00:18:43.834 Removing: /var/run/dpdk/spdk_pid468583
00:18:43.834 Removing: /var/run/dpdk/spdk_pid468920
00:18:43.834 Removing: /var/run/dpdk/spdk_pid469240
00:18:43.834 Removing: /var/run/dpdk/spdk_pid469397
00:18:43.834 Removing: /var/run/dpdk/spdk_pid469554
00:18:43.834 Removing: /var/run/dpdk/spdk_pid469800
00:18:43.834 Removing: /var/run/dpdk/spdk_pid470047
00:18:43.834 Removing: /var/run/dpdk/spdk_pid473773
00:18:43.834 Removing: /var/run/dpdk/spdk_pid474073
00:18:43.834 Removing: /var/run/dpdk/spdk_pid474360
00:18:43.834 Removing: /var/run/dpdk/spdk_pid474378
00:18:43.834 Removing: /var/run/dpdk/spdk_pid475063
00:18:43.834 Removing: /var/run/dpdk/spdk_pid475084
00:18:43.834 Removing: /var/run/dpdk/spdk_pid475893
00:18:43.834 Removing: /var/run/dpdk/spdk_pid475906
00:18:43.834 Removing: /var/run/dpdk/spdk_pid476322
00:18:43.834 Removing: /var/run/dpdk/spdk_pid476333
00:18:43.834 Removing: /var/run/dpdk/spdk_pid476618
00:18:43.834 Removing: /var/run/dpdk/spdk_pid476754
00:18:43.834 Removing: /var/run/dpdk/spdk_pid477257
00:18:43.834 Removing: /var/run/dpdk/spdk_pid477515
00:18:43.834 Removing: /var/run/dpdk/spdk_pid477732
00:18:43.834 Removing: /var/run/dpdk/spdk_pid478186
00:18:43.834 Removing: /var/run/dpdk/spdk_pid478350
00:18:43.834 Removing: /var/run/dpdk/spdk_pid478549
00:18:43.834 Removing: /var/run/dpdk/spdk_pid478841
00:18:43.834 Removing: /var/run/dpdk/spdk_pid479119
00:18:43.834 Removing: /var/run/dpdk/spdk_pid479388
00:18:43.834 Removing: /var/run/dpdk/spdk_pid479588
00:18:43.834 Removing: /var/run/dpdk/spdk_pid479862
00:18:43.834 Removing: /var/run/dpdk/spdk_pid480153
00:18:43.834 Removing: /var/run/dpdk/spdk_pid480439
00:18:43.834 Removing: /var/run/dpdk/spdk_pid480716
00:18:43.834 Removing: /var/run/dpdk/spdk_pid480926
00:18:43.834 Removing: /var/run/dpdk/spdk_pid481168
00:18:43.834 Removing: /var/run/dpdk/spdk_pid481444
00:18:43.834 Removing: /var/run/dpdk/spdk_pid481724
00:18:43.834 Removing: /var/run/dpdk/spdk_pid482009
00:18:43.834 Removing: /var/run/dpdk/spdk_pid482287
00:18:43.834 Removing: /var/run/dpdk/spdk_pid482483
00:18:43.834 Removing: /var/run/dpdk/spdk_pid482754
00:18:43.834 Removing: /var/run/dpdk/spdk_pid483042
00:18:43.834 Removing: /var/run/dpdk/spdk_pid483332
00:18:43.834 Removing: /var/run/dpdk/spdk_pid483623
00:18:43.834 Removing: /var/run/dpdk/spdk_pid483795
00:18:43.834 Removing: /var/run/dpdk/spdk_pid484072
00:18:43.834 Removing: /var/run/dpdk/spdk_pid484353
00:18:43.834 Removing: /var/run/dpdk/spdk_pid484702
00:18:43.834 Removing: /var/run/dpdk/spdk_pid485036
00:18:43.834 Removing: /var/run/dpdk/spdk_pid485563
00:18:43.834 Removing: /var/run/dpdk/spdk_pid486258
00:18:43.834 Removing: /var/run/dpdk/spdk_pid486926
00:18:43.834 Removing: /var/run/dpdk/spdk_pid489297
00:18:43.834 Removing: /var/run/dpdk/spdk_pid490493
00:18:43.834 Removing: /var/run/dpdk/spdk_pid491651
00:18:43.834 Removing: /var/run/dpdk/spdk_pid492374
00:18:43.834 Removing: /var/run/dpdk/spdk_pid492423
00:18:43.834 Removing: /var/run/dpdk/spdk_pid492598
00:18:43.834 Removing: /var/run/dpdk/spdk_pid495357
00:18:43.834 Removing: /var/run/dpdk/spdk_pid496517
00:18:43.834 Removing: /var/run/dpdk/spdk_pid499034
00:18:43.834 Removing: /var/run/dpdk/spdk_pid500109
00:18:43.834 Removing: /var/run/dpdk/spdk_pid501297
00:18:43.834 Removing: /var/run/dpdk/spdk_pid502097
00:18:43.834 Removing: /var/run/dpdk/spdk_pid502121
00:18:43.834 Removing: /var/run/dpdk/spdk_pid502218
00:18:43.834 Removing: /var/run/dpdk/spdk_pid512089
00:18:43.834 Removing: /var/run/dpdk/spdk_pid513075
00:18:43.834 Removing: /var/run/dpdk/spdk_pid513472
00:18:43.834 Removing: /var/run/dpdk/spdk_pid514003
00:18:43.834 Removing: /var/run/dpdk/spdk_pid515182
00:18:43.834 Removing: /var/run/dpdk/spdk_pid518940
00:18:43.834 Clean
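The Removing: list above is the harness deleting the on-disk state a dead SPDK target leaves behind: the DPDK runtime directory (hugepage and memseg metadata), per-PID trace shared memory, and the PCI/CPU claim files that would make a fresh target think the device and cores are still held. The equivalent manual cleanup, sketched with illustrative globs rather than the harness's exact code; run as root, and only when no SPDK process is alive:

#!/usr/bin/env bash
# Sketch: clear stale SPDK/DPDK state matching the "Removing:" paths above.

# Refuse to touch anything if a target is still running.
if pgrep -x spdk_tgt >/dev/null; then
    echo "spdk_tgt still running; refusing to remove its state" >&2
    exit 1
fi

# DPDK per-process runtime directory (config, fbarray memseg files,
# hugepage_info) plus the per-PID metadata files next to it.
rm -rf /var/run/dpdk/spdk0
rm -f  /var/run/dpdk/spdk_pid*

# Trace shared-memory files keyed by dead targets' PIDs.
rm -f /dev/shm/spdk_tgt_trace.pid*

# Advisory lock files claiming the PCI device and CPU cores.
rm -f /var/tmp/spdk_pci_lock_* /var/tmp/spdk_cpu_lock_*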
00:18:45.738 00:00:24 nvme_cuse -- common/autotest_common.sh@1453 -- # return 1
00:18:45.738 00:00:24 nvme_cuse -- common/autotest_common.sh@1 -- # :
00:18:45.738 00:00:24 nvme_cuse -- common/autotest_common.sh@1 -- # exit 1
00:18:45.738 00:00:24 -- spdk/autorun.sh@27 -- $ trap - ERR
00:18:45.738 00:00:24 -- spdk/autorun.sh@27 -- $ print_backtrace
00:18:45.738 00:00:24 -- common/autotest_common.sh@1157 -- $ [[ ehxBET =~ e ]]
00:18:45.738 00:00:24 -- common/autotest_common.sh@1159 -- $ args=('/var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf')
00:18:45.738 00:00:24 -- common/autotest_common.sh@1159 -- $ local args
00:18:45.738 00:00:24 -- common/autotest_common.sh@1161 -- $ xtrace_disable
00:18:45.738 00:00:24 -- common/autotest_common.sh@10 -- $ set +x
00:18:45.738 ========== Backtrace start: ==========
00:18:45.738
00:18:45.738 in spdk/autorun.sh:27 -> main(["/var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf"])
00:18:45.738 ...
00:18:45.739 22 trap 'timing_finish || exit 1' EXIT
00:18:45.739 23
00:18:45.739 24 # Runs agent scripts
00:18:45.739 25 $rootdir/autobuild.sh "$conf"
00:18:45.739 26 if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then
00:18:45.739 => 27 sudo -E $rootdir/autotest.sh "$conf"
00:18:45.739 28 fi
00:18:45.739 ...
00:18:45.739
00:18:45.739 ========== Backtrace end ==========
00:18:45.739 00:00:24 -- common/autotest_common.sh@1198 -- $ return 0
00:18:45.739 00:00:24 -- spdk/autorun.sh@1 -- $ timing_finish
00:18:45.739 00:00:24 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt ]]
00:18:45.739 00:00:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:45.739 00:00:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:45.739 00:00:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt
00:18:45.751 [Pipeline] }
00:18:45.769 [Pipeline] // stage
00:18:45.777 [Pipeline] }
00:18:45.792 [Pipeline] // timeout
00:18:45.798 [Pipeline] }
00:18:45.801 ERROR: script returned exit code 1
00:18:45.801 Setting overall build result to FAILURE
00:18:45.813 [Pipeline] // catchError
00:18:45.817 [Pipeline] }
00:18:45.830 [Pipeline] // wrap
00:18:45.836 [Pipeline] }
00:18:45.848 [Pipeline] // catchError
00:18:45.855 [Pipeline] stage
00:18:45.858 [Pipeline] { (Epilogue)
00:18:45.870 [Pipeline] catchError
00:18:45.872 [Pipeline] {
00:18:45.884 [Pipeline] echo
00:18:45.886 Cleanup processes
00:18:45.892 [Pipeline] sh
00:18:46.178 + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:18:46.178 448502 sudo -E /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784692
00:18:46.178 448553 bash /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784692
00:18:46.178 525518 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:18:46.192 [Pipeline] sh
00:18:46.480 ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:18:46.480 ++ grep -v 'sudo pgrep'
00:18:46.480 ++ awk '{print $1}'
00:18:46.480 + sudo kill -9 448502 448553
00:18:46.494 [Pipeline] sh
00:18:46.784 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
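The epilogue's "Cleanup processes" step above finds every process still referencing the workspace (here the two collect-bmc-pm power monitors, PIDs 448502 and 448553) and kills them. The same pipeline expanded with comments; the xargs form is a minor restructure of the `kill -9 $(...)` shown in the log, and the workspace path is the one from this job:

#!/usr/bin/env bash
# Sketch: the epilogue's workspace-process cleanup, annotated.
ws=/var/jenkins/workspace/nvme-phy-autotest/spdk

# pgrep -a prints the full command line, -f matches against it: this
# lists every process whose command line mentions the workspace.
sudo pgrep -af "$ws" |
    grep -v 'sudo pgrep' |   # drop the pgrep invocation itself
    awk '{print $1}' |       # keep only the PIDs
    xargs -r sudo kill -9    # -r: do nothing if the list is empty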
00:18:53.382 [Pipeline] sh
00:18:53.674 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:53.675 Artifacts sizes are good
00:18:53.691 [Pipeline] archiveArtifacts
00:18:53.698 Archiving artifacts
00:18:53.915 [Pipeline] sh
00:18:54.202 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvme-phy-autotest
00:18:54.219 [Pipeline] cleanWs
00:18:54.232 [WS-CLEANUP] Deleting project workspace...
00:18:54.232 [WS-CLEANUP] Deferred wipeout is used...
00:18:54.240 [WS-CLEANUP] done
00:18:54.242 [Pipeline] }
00:18:54.259 [Pipeline] // catchError
00:18:54.271 [Pipeline] echo
00:18:54.273 Tests finished with errors. Please check the logs for more info.
00:18:54.278 [Pipeline] echo
00:18:54.280 Execution node will be rebooted.
00:18:54.311 [Pipeline] build
00:18:54.315 Scheduling project: reset-job
00:18:54.327 [Pipeline] sh
00:18:54.610 + logger -p user.err -t JENKINS-CI
00:18:54.620 [Pipeline] }
00:18:54.633 [Pipeline] // stage
00:18:54.638 [Pipeline] }
00:18:54.653 [Pipeline] // node
00:18:54.659 [Pipeline] End of Pipeline
00:18:54.707 Finished: FAILURE